• Title/Summary/Keyword: Image-based localization


A Study on the Wavelet based Still Image Transmission over the Wireless Channel (무선채널환경에서 웨이블릿 기반 정지영상 전송에 관한 연구)

  • Nah, Won;Baek, Joong-Hwan
    • Proceedings of the IEEK Conference / 2001.06d / pp.179-182 / 2001
  • This paper studies wavelet-based still image transmission over a wireless channel. EZW (Embedded Zerotree Wavelet) is an efficient and scalable wavelet-based image coding technique that provides progressive transmission through a multi-resolution representation, thereby reducing storage cost. Despite these advantages, EZW is very sensitive to errors: coding is performed subband by subband and uses arithmetic coding, a form of variable-length coding, so even a 1-2 bit error can degrade the quality of the entire image. Error localization and recovery are therefore required. This paper investigates the use of reversible variable-length codes (RVLC) and data partitioning. RVLCs are known to have superior error-recovery properties due to their two-way decoding capability, and data partitioning is essential for applying RVLC. In this work, we determine an appropriate data partitioning length for each SNR (signal-to-noise power ratio) and demonstrate error localization over the wireless channel.
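
The partitioning trade-off described above can be pictured with a toy simulation (not the authors' EZW/RVLC codec): a bitstream is split into fixed-length partitions with resynchronization points, a single bit error is assumed to corrupt the rest of its partition under forward-only decoding, but only the span between the first and last corrupted bits when two-way (RVLC-style) decoding is available. Partition lengths, the bit-error rate, and the damage model are illustrative assumptions.

```python
# Toy Monte Carlo: expected fraction of a bitstream lost to random bit errors,
# comparing forward-only decoding vs. two-way (RVLC-style) decoding.
# Illustrative assumptions only; this is not the EZW/RVLC codec from the paper.
import numpy as np

def expected_loss(stream_bits=100_000, partition_bits=1_000, ber=1e-4,
                  two_way=False, trials=200, seed=None):
    rng = np.random.default_rng(seed)
    losses = []
    for _ in range(trials):
        errors = np.flatnonzero(rng.random(stream_bits) < ber)  # corrupted bit positions
        lost = 0
        n_parts = stream_bits // partition_bits
        for p in range(n_parts):
            lo, hi = p * partition_bits, (p + 1) * partition_bits
            hits = errors[(errors >= lo) & (errors < hi)]
            if hits.size == 0:
                continue
            if two_way:
                # decode forward to the first error and backward to the last one;
                # only the span in between is lost
                lost += hits.max() - hits.min() + 1
            else:
                # forward-only: everything from the first error to the partition end is lost
                lost += hi - hits.min()
        losses.append(lost / stream_bits)
    return float(np.mean(losses))

if __name__ == "__main__":
    for L in (500, 1_000, 4_000):
        fwd = expected_loss(partition_bits=L, two_way=False)
        rvl = expected_loss(partition_bits=L, two_way=True)
        print(f"partition={L:5d}  forward-only loss={fwd:.4f}  two-way loss={rvl:.4f}")
```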


Direct Depth and Color-based Environment Modeling and Mobile Robot Navigation (스테레오 비전 센서의 깊이 및 색상 정보를 이용한 환경 모델링 기반의 이동로봇 주행기술)

  • Park, Soon-Yong;Park, Mignon;Park, Sung-Kee
    • The Journal of Korea Robotics Society / v.3 no.3 / pp.194-202 / 2008
  • This paper describes a new method for indoor environment mapping and localization with a stereo camera. For environment modeling, we directly use the depth and color information of image pixels as visual features; furthermore, only the depth and color information along the horizontal centerline of the image, through which the optical axis passes, is used. The advantage of this choice is that a measure between the model and sensing data can easily be built on the horizontal centerline alone, because the vertical working volume between model and sensing data changes with robot motion. This yields a compact and efficient map representation of the indoor environment. Based on such nodes and the sensing data, we also suggest a method for estimating the mobile robot's position with a random-sampling stochastic algorithm. Basic real-world experiments show that the proposed method can serve as an effective visual navigation algorithm.
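
As a rough illustration of the centerline idea (the function names and the Gaussian likelihood below are my own assumptions, not the authors' exact measure), the sketch extracts the depth and color values along the horizontal centerline of an RGB-D frame and scores a pose hypothesis by comparing a predicted centerline scan against the sensed one.

```python
# Minimal sketch: use only the horizontal-centerline pixels of a depth + color
# frame as the environment feature, and weight a pose hypothesis by how well a
# predicted centerline scan matches the sensed one. Noise scales are illustrative.
import numpy as np

def centerline_features(color, depth):
    """color: HxWx3 array, depth: HxW array (meters). Returns a (W, 4) feature scan."""
    row = color.shape[0] // 2                         # the row the optical axis passes through
    return np.concatenate([depth[row][:, None],       # depth along the centerline
                           color[row].astype(float)], # color along the centerline
                          axis=1)

def pose_likelihood(sensed, predicted, sigma_d=0.1, sigma_c=20.0):
    """Compare two (W, 4) centerline scans; higher means a better match."""
    d_err = sensed[:, 0] - predicted[:, 0]
    c_err = sensed[:, 1:] - predicted[:, 1:]
    return float(np.exp(-0.5 * (np.mean(d_err**2) / sigma_d**2 +
                                np.mean(c_err**2) / sigma_c**2)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    color = rng.integers(0, 255, (480, 640, 3))
    depth = rng.uniform(0.5, 5.0, (480, 640))
    scan = centerline_features(color, depth)
    print(scan.shape, pose_likelihood(scan, scan))    # perfect match -> 1.0
```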


An Approach to 3D Object Localization Based on Monocular Vision

  • Jung, Sung-Hoon;Jang, Do-Won;Kim, Min-Hwan
    • Journal of Korea Multimedia Society / v.11 no.12 / pp.1658-1667 / 2008
  • Reconstruction of 3D objects from a single-view image is generally an ill-posed problem because of projection distortion. A monocular-vision-based 3D object localization method is proposed in this paper, which approximates an object on the ground by a simple bounding solid and works automatically without any prior information about the object. An object judged spherical or cylindrical by a circularity measure is approximated by a bounding cylinder, while other free-shaped objects are approximated by a bounding box or a bounding cylinder as appropriate. For a general object, its silhouette on the ground is first computed by back-projecting its projected image in the image plane onto the ground plane; a base rectangle on the ground is then determined using the intuition that the parts of the object touching the ground should appear in the lower part of the silhouette. The base rectangle is adjusted and extended until the bounding box derived from it encloses the object sufficiently, and the height of the bounding box is likewise set to enclose the object. When the object appears round, a bounding cylinder that minimally encloses the bounding box is selected instead of the bounding box. The bounding solid can be used to localize a 3D object on the ground and to roughly estimate its volume. The usefulness of our approach is presented with experimental results on real image objects, and its limitations are discussed.
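
The ground-plane back-projection that the abstract relies on reduces to a ray-plane intersection under the pinhole model. The sketch below assumes calibrated intrinsics K and a camera pose (R, t); the numbers in the demo are placeholders, not values from the paper.

```python
# Minimal sketch of back-projecting an image pixel onto the ground plane Z = 0
# with a calibrated pinhole camera (K, R, t assumed known; values illustrative).
import numpy as np

def backproject_to_ground(u, v, K, R, t):
    """Return the 3D world point on the plane Z = 0 seen at pixel (u, v)."""
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])  # viewing ray in camera frame
    ray_world = R.T @ ray_cam                           # rotate the ray into world frame
    center = -R.T @ t                                   # camera center in world frame
    s = -center[2] / ray_world[2]                       # intersect the ray with Z = 0
    return center + s * ray_world

if __name__ == "__main__":
    K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
    R = np.diag([1.0, -1.0, -1.0])   # camera looking straight down at the ground
    t = np.array([0.0, 0.0, 1.5])    # camera center ends up 1.5 m above the ground
    print(backproject_to_ground(400, 240, K, R, t))   # -> point on Z = 0
```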


Relative Localization for Mobile Robot using 3D Reconstruction of Scale-Invariant Features (스케일불변 특징의 삼차원 재구성을 통한 이동 로봇의 상대위치추정)

  • Kil, Se-Kee;Lee, Jong-Shill;Ryu, Je-Goon;Lee, Eung-Hyuk;Hong, Seung-Hong;Shen, Dong-Fan
    • The Transactions of the Korean Institute of Electrical Engineers D / v.55 no.4 / pp.173-180 / 2006
  • A key component of autonomous navigation for an intelligent home robot is localization and map building with features recognized from the environment, which requires accurate measurement of the relative position between the robot and the features. In this paper, we propose a relative localization algorithm based on 3D reconstruction of scale-invariant features from two images captured by two parallel cameras. We capture two images with parallel cameras mounted on the front of the robot and detect scale-invariant feature points in each image using SIFT (scale-invariant feature transform). We then match the feature points of the two images and obtain the relative location through 3D reconstruction of the matched points. A conventional stereo camera requires highly precise extrinsic calibration and pixel matching between the two camera images; in contrast, our setup uses two ordinary cameras with scale-invariant feature points, so the extrinsic parameters are easy to set up, and the 3D reconstruction needs no additional sensor. The results can simultaneously be used for obstacle avoidance, map building, and localization. We set the distance between the two cameras to 20 cm and capture 3 frames per second. The experimental results show a maximum error of ±6 cm in the range below 2 m and ±15 cm in the range between 2 m and 4 m.
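
A compressed sketch of that pipeline using OpenCV is given below; the focal length and baseline are illustrative values, not the paper's calibration. It detects SIFT keypoints in the left and right images, matches them with a ratio test, and converts the horizontal disparity of each match into depth via Z = f·B/d for parallel cameras.

```python
# Minimal sketch: SIFT matching between two parallel cameras and depth from
# disparity (Z = f * B / d). Focal length and baseline are illustrative values.
import cv2
import numpy as np

def depth_from_parallel_pair(img_left, img_right, focal_px=700.0, baseline_m=0.20):
    sift = cv2.SIFT_create()
    kp_l, des_l = sift.detectAndCompute(img_left, None)
    kp_r, des_r = sift.detectAndCompute(img_right, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    points = []
    for pair in matcher.knnMatch(des_l, des_r, k=2):
        if len(pair) < 2:
            continue
        m, n = pair
        if m.distance < 0.75 * n.distance:               # Lowe's ratio test
            u_l, v_l = kp_l[m.queryIdx].pt
            u_r, _ = kp_r[m.trainIdx].pt
            disparity = u_l - u_r
            if disparity > 1.0:                           # drop tiny/negative disparities
                points.append((u_l, v_l, focal_px * baseline_m / disparity))
    return np.array(points)                               # rows of (u, v, depth_m)

# usage (paths are placeholders):
#   left  = cv2.imread("left.png",  cv2.IMREAD_GRAYSCALE)
#   right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
#   print(depth_from_parallel_pair(left, right)[:5])
```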

Feature Voting for Object Localization via Density Ratio Estimation

  • Wang, Liantao;Deng, Dong;Chen, Chunlei
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.12 / pp.6009-6027 / 2019
  • Support vector machine (SVM) classifiers have been widely used for object detection. These methods usually locate the object by finding the region with the maximal score in an image. With a bag-of-features representation, the SVM score of an image region can be written as the sum of the weights of the features it contains, so the search can be executed efficiently using strategies such as branch-and-bound. However, a feature weight derived by optimizing region classification does not really reveal the category knowledge of a feature point, which can cause poor localization. In this paper, we represent an image region by a collection of local feature points and localize the object as the region with the maximum posterior probability of belonging to the object class. Based on Bayes' theorem and naive-Bayes assumptions, the posterior probability is reformulated as a sum of feature scores, where each feature score takes the form of the logarithm of a probability ratio. Instead of estimating the numerator and denominator probabilities separately, we employ density ratio estimation techniques directly and thereby overcome the above limitation. Experiments on a car dataset and the PASCAL VOC 2007 dataset validate the effectiveness of our method compared to the baselines. In addition, performance can be further improved by taking advantage of recently developed deep convolutional neural network features.
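
The voting step can be pictured as follows (a toy sketch: the density-ratio estimator itself is not implemented, and the per-feature scores are simply given). Each local feature carries a score log(p(f|object)/p(f|background)); the scores are splatted into a map, and the fixed-size window with the maximal score sum is found with an integral image.

```python
# Toy sketch of feature voting: each local feature carries a log density-ratio
# score; the object is localized as the fixed-size window with the largest sum
# of scores, found via an integral image. Scores here are given, not estimated.
import numpy as np

def best_window(points, scores, img_h, img_w, win_h=80, win_w=120):
    """points: (N, 2) array of (x, y); scores: (N,) log-ratio scores."""
    score_map = np.zeros((img_h, img_w))
    for (x, y), s in zip(points.astype(int), scores):
        score_map[y, x] += s
    ii = np.zeros((img_h + 1, img_w + 1))                 # integral image with zero border
    ii[1:, 1:] = np.cumsum(np.cumsum(score_map, axis=0), axis=1)
    best, best_xy = -np.inf, (0, 0)
    for y in range(img_h - win_h + 1):
        for x in range(img_w - win_w + 1):
            s = (ii[y + win_h, x + win_w] - ii[y, x + win_w]
                 - ii[y + win_h, x] + ii[y, x])           # window sum in O(1)
            if s > best:
                best, best_xy = s, (x, y)
    return best_xy, best

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    pts = rng.integers(0, [320, 240], size=(200, 2))      # (x, y) feature locations
    sc = rng.normal(-0.2, 1.0, 200)                       # mostly background-like scores
    inside = (pts[:, 0] > 100) & (pts[:, 0] < 180) & (pts[:, 1] > 60) & (pts[:, 1] < 120)
    sc[inside] += 2.0                                     # boost scores in a hidden object box
    print(best_window(pts, sc, img_h=240, img_w=320))
```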

Development of a Vehicle Positioning Algorithm Using Reference Images (기준영상을 이용한 차량 측위 알고리즘 개발)

  • Kim, Hojun;Lee, Impyeong
    • Korean Journal of Remote Sensing / v.34 no.6_1 / pp.1131-1142 / 2018
  • Autonomous vehicles are being widely developed and operated because of their advantages in reducing traffic accidents and saving driving time and cost, and vehicle localization is an essential component of their operation. In this paper, a sensor-fusion localization algorithm is developed for cost-effective localization using in-vehicle sensors, GNSS, an image sensor, and reference images prepared in advance. The information in the reference images overcomes the low positioning accuracy that results when only the sensor information is used, and it also provides stable position estimates even when the vehicle is in an area where satellite signals are blocked. A particle filter is used for sensor fusion because it can represent the diverse probability density distributions of the individual sensors. To evaluate the performance of the algorithm, a data acquisition system was built, and driving data and reference image data were acquired. The results verify that the vehicle position can be estimated with an accuracy of about 0.7 m when the on-board camera images and the reference image information are integrated along a route where the satellite-based position has a relatively large error.
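
A skeletal particle filter update of the kind the abstract describes might look like the following; the motion and measurement noise values are placeholders, and the reference-image matching is abstracted into a single position measurement rather than reproducing the paper's method.

```python
# Skeletal particle filter for vehicle positioning: predict with odometry,
# weight with a GNSS fix and an (assumed) position from reference-image
# matching, then resample. All noise parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def predict(particles, v, yaw_rate, dt, sigma=(0.2, 0.02)):
    """particles: (N, 3) array of (x, y, heading). Propagate with noisy odometry."""
    n = len(particles)
    v_n = v + rng.normal(0, sigma[0], n)
    w_n = yaw_rate + rng.normal(0, sigma[1], n)
    particles[:, 2] += w_n * dt
    particles[:, 0] += v_n * dt * np.cos(particles[:, 2])
    particles[:, 1] += v_n * dt * np.sin(particles[:, 2])
    return particles

def update(particles, gnss_xy, image_xy, sigma_gnss=3.0, sigma_img=0.5):
    """Weight by the GNSS fix and by a position derived from reference-image matching."""
    d_g = np.linalg.norm(particles[:, :2] - gnss_xy, axis=1)
    d_i = np.linalg.norm(particles[:, :2] - image_xy, axis=1)
    w = np.exp(-0.5 * (d_g / sigma_gnss) ** 2) * np.exp(-0.5 * (d_i / sigma_img) ** 2)
    w += 1e-300
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)   # resampling step
    return particles[idx]

if __name__ == "__main__":
    particles = np.zeros((500, 3))
    particles = predict(particles, v=10.0, yaw_rate=0.0, dt=0.1)
    particles = update(particles, gnss_xy=np.array([1.0, 0.2]),
                       image_xy=np.array([1.0, 0.0]))
    print(particles[:, :2].mean(axis=0))    # fused position estimate
```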

Geometric Formulation of Rectangle Based Relative Localization of Mobile Robot (이동 로봇의 상대적 위치 추정을 위한 직사각형 기반의 기하학적 방법)

  • Lee, Joo-Haeng;Lee, Jaeyeon;Lee, Ahyun;Kim, Jaehong
    • The Journal of Korea Robotics Society / v.11 no.1 / pp.9-18 / 2016
  • A rectangle-based relative localization method for a mobile robot is proposed, based on a novel geometric formulation. Rectangular shapes are ubiquitous in the man-made environments where mobile robots navigate; when a scene rectangle is captured by a camera attached to a mobile robot, localization can be performed and described in the relative coordinates of the scene rectangle. Notably, our method works with a single image of a scene rectangle whose aspect ratio is unknown, and camera calibration is unnecessary under the assumption of a pinhole camera model. The proposed method is largely based on the theory of coupled line cameras (CLC), which provides a basis for efficient computation with analytic solutions and an intuitive geometric interpretation. We introduce the fundamentals of CLC and describe the proposed method together with experimental results in a simulation environment.
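
CLC itself gives an analytic, calibration-free solution that is not reproduced here. As a rough point of comparison, the sketch below solves the same "pose from one imaged rectangle" problem in the conventional way with OpenCV's PnP, which, unlike CLC, requires an assumed aspect ratio and intrinsic matrix (both placeholder values here).

```python
# Sketch of the conventional alternative to CLC: recover the camera pose relative
# to a scene rectangle from its four image corners via solvePnP. Unlike CLC, this
# needs assumed intrinsics K and a known rectangle size/aspect ratio.
import cv2
import numpy as np

def pose_from_rectangle(corners_px, width_m=1.0, height_m=0.5, K=None):
    """corners_px: (4, 2) image corners in order TL, TR, BR, BL."""
    if K is None:
        K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
    object_pts = np.array([[0, 0, 0],
                           [width_m, 0, 0],
                           [width_m, height_m, 0],
                           [0, height_m, 0]], dtype=np.float64)   # rectangle in its own plane
    image_pts = np.asarray(corners_px, dtype=np.float64)
    ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
    R, _ = cv2.Rodrigues(rvec)
    return ok, R, tvec          # rectangle-to-camera rotation and translation

# usage (corner pixels are placeholders):
#   ok, R, t = pose_from_rectangle(np.array([[300, 200], [420, 210],
#                                            [430, 280], [290, 270]]))
```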

Efficient Object Localization using Color Correlation Back-projection (칼라 상관관계 역투영법을 적용한 효율적인 객체 지역화 기법)

  • Lee, Yong-Hwan;Cho, Han-Jin;Lee, June-Hwan
    • Journal of Digital Convergence / v.14 no.5 / pp.263-271 / 2016
  • Localizing an object in an image is a common task in the field of computer vision. Because existing methods detect only a single object per image, their use in applications is limited when similar objects appear in the actual picture. This paper proposes an efficient object localization method for image recognition. The proposed method uses color correlation back-projection in the YCbCr chromaticity color space to address the object localization problem. The algorithm lets users detect the primary location of an object within an image, and candidate regions can be detected accurately without any information about the number of objects. To evaluate the performance of the proposed algorithm, we estimate the success rate of object localization on a commonly used image database; experimental results show an improvement of 21% in success ratio. This study builds on spatially localized color features and correlation-based localization, and the main contribution of this paper is applying the correlogram to object localization in a different way.
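
For comparison, here is plain chromaticity histogram back-projection with OpenCV; this is not the paper's correlation/correlogram variant, and the bin counts and threshold are arbitrary choices. A Cr-Cb histogram of a sample object patch is back-projected onto the image and thresholded to obtain candidate object regions.

```python
# Minimal sketch of chromaticity back-projection in the YCrCb space. This is
# ordinary histogram back-projection, not the paper's color-correlation variant;
# bin counts and the threshold are arbitrary choices.
import cv2
import numpy as np

def backproject_object(image_bgr, template_bgr, bins=32, thresh=50):
    img_ycc = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2YCrCb)
    tpl_ycc = cv2.cvtColor(template_bgr, cv2.COLOR_BGR2YCrCb)
    # 2D histogram over the chroma channels (Cr, Cb), ignoring luma
    hist = cv2.calcHist([tpl_ycc], [1, 2], None, [bins, bins], [0, 256, 0, 256])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    backproj = cv2.calcBackProject([img_ycc], [1, 2], hist, [0, 256, 0, 256], 1)
    mask = (backproj > thresh).astype(np.uint8) * 255
    # candidate object regions = connected components of the thresholded map
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    return backproj, stats[1:], centroids[1:]      # skip the background label 0

# usage (paths are placeholders):
#   img = cv2.imread("scene.jpg"); tpl = cv2.imread("object_patch.jpg")
#   _, boxes, centers = backproject_object(img, tpl)
```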

Vision-Based Indoor Localization Using Artificial Landmarks and Natural Features on the Ceiling with Optical Flow and a Kalman Filter

  • Rusdinar, Angga;Kim, Sungshin
    • International Journal of Fuzzy Logic and Intelligent Systems / v.13 no.2 / pp.133-139 / 2013
  • This paper proposes a vision-based indoor localization method for autonomous vehicles. A single upward-facing digital camera was mounted on an autonomous vehicle and used as a vision sensor to identify artificial landmarks and natural corner features on the ceiling. An interest point detector was used to find the natural features, and an optical flow algorithm provided the direction and translation of the vehicle, which was used to track its movements. Because random noise caused by uneven lighting disrupts the calculation of the vehicle translation, a Kalman filter was used to estimate the vehicle position. These algorithms were tested on a vehicle in a real environment: the image processing method recognized the landmarks precisely, while the Kalman filter estimated the vehicle's position accurately. The experimental results confirm that the proposed approaches can be implemented in practical situations.
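
A condensed version of that pipeline is sketched below; the feature counts, noise covariances, and the constant-velocity model are my assumptions. It tracks ceiling corners between frames with pyramidal Lucas-Kanade optical flow and feeds the accumulated translation into a small Kalman filter.

```python
# Sketch: estimate frame-to-frame translation from ceiling features with
# Lucas-Kanade optical flow, then smooth the accumulated position with a
# constant-velocity Kalman filter. Covariances are illustrative.
import cv2
import numpy as np

def frame_translation(prev_gray, cur_gray):
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                  qualityLevel=0.01, minDistance=7)
    if pts is None:
        return np.zeros(2)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, pts, None)
    good = status.ravel() == 1
    if not good.any():
        return np.zeros(2)
    return np.mean(nxt[good] - pts[good], axis=0).ravel()   # (dx, dy) in pixels

class ConstantVelocityKF:
    """State [x, y, vx, vy]; the measurement is the accumulated (x, y) position."""
    def __init__(self, dt=1.0, q=1e-2, r=1.0):
        self.x = np.zeros(4)
        self.P = np.eye(4)
        self.F = np.eye(4); self.F[0, 2] = self.F[1, 3] = dt
        self.H = np.zeros((2, 4)); self.H[0, 0] = self.H[1, 1] = 1.0
        self.Q = q * np.eye(4); self.R = r * np.eye(2)

    def step(self, z):
        # predict
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # update with the measured position z
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]      # filtered position estimate
```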

Lane Positioning in Highways Based on Road-sign Tracking by Kalman Filter (칼만필터 기반의 도로표지판 추적을 이용한 차량의 횡방향 위치인식)

  • Lee, Jaehong;Kim, Hakil
    • Transactions of the Korean Society of Automotive Engineers / v.22 no.3 / pp.50-59 / 2014
  • This paper proposes a method for localizing a vehicle, particularly its lateral position, for the purpose of recognizing the driving lane. By tracking road signs, the relative position between the vehicle and a sign is calculated, and the absolute position is obtained using the known information from the installation regulations. The proposed method uses a Kalman filter for road-sign tracking and analyzes the motion using the pinhole camera model. To classify the road sign, ORB (Oriented FAST and Rotated BRIEF) features from the input image are matched against a database. From the absolute position of the vehicle, the driving lane is recognized. Experiments performed on highway driving videos show that the proposed method can compensate for common GPS localization errors.
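
The lateral-offset geometry the abstract relies on reduces to the pinhole relation x = Z·(u − c_x)/f_x. The sketch below shows one plausible convention for turning a tracked sign's pixel column and its regulated lateral offset into a vehicle lateral position and a lane index; the focal length, offsets, and lane width are placeholder numbers, not values from the paper or the regulation.

```python
# Sketch: lateral vehicle position from a tracked road sign via the pinhole
# model. The sign's lateral offset from the road edge is assumed known from
# installation regulations; all numbers here are placeholders.
import numpy as np

def lateral_offset_from_sign(u_sign, depth_m, fx, cx, sign_offset_m):
    """u_sign: sign's pixel column; depth_m: estimated distance to the sign.
    Returns the camera's lateral distance from the sign's reference line
    (one plausible sign/edge convention; adjust signs for a real setup)."""
    x_cam = depth_m * (u_sign - cx) / fx      # sign's lateral position in the camera frame
    return sign_offset_m - x_cam              # vehicle offset from the reference line

def lane_index(lateral_m, lane_width_m=3.6):
    """Map a lateral offset from the road edge to a lane number (1 = outermost)."""
    return int(lateral_m // lane_width_m) + 1

if __name__ == "__main__":
    lat = lateral_offset_from_sign(u_sign=850, depth_m=40.0,
                                   fx=1200.0, cx=640.0, sign_offset_m=10.0)
    print(lat, lane_index(lat))
```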