• Title/Summary/Keyword: Object Localization


Sonar-based yaw estimation of target object using shape prediction on viewing angle variation with neural network

  • Sung, Minsung;Yu, Son-Cheol
    • Ocean Systems Engineering
    • /
    • v.10 no.4
    • /
    • pp.435-449
    • /
    • 2020
  • This paper proposes a method to estimate an underwater target object's yaw angle from a sonar image. A simulator that models the imaging mechanism of the sonar sensor, combined with a generative adversarial network for style transfer, generates realistic template images of the target object by predicting its shape at different viewing angles. The target object's yaw angle is then estimated by comparing these templates with the shape observed in real sonar images. The method was verified in water tank experiments and was also applied to an AUV in field experiments. Because it provides bearing information between an underwater object and the sonar sensor, the method can support algorithms such as underwater localization and multi-view-based underwater object recognition.
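
The comparison step can be illustrated with a minimal template-matching sketch, assuming the angle-indexed template images from the simulator/GAN pipeline are already available; the function names and the use of normalized cross-correlation are illustrative assumptions, not the paper's exact implementation.

```python
import cv2
import numpy as np

def estimate_yaw(sonar_crop, templates):
    """sonar_crop: grayscale patch containing the target shape.
    templates: dict mapping candidate yaw angle (deg) -> grayscale template."""
    best_yaw, best_score = None, -np.inf
    for yaw, tmpl in templates.items():
        # Resize the template to the crop size so the images can be compared
        tmpl = cv2.resize(tmpl, (sonar_crop.shape[1], sonar_crop.shape[0]))
        # Normalized cross-correlation as a simple shape-similarity measure
        score = cv2.matchTemplate(sonar_crop, tmpl, cv2.TM_CCOEFF_NORMED)[0, 0]
        if score > best_score:
            best_yaw, best_score = yaw, score
    return best_yaw, best_score
```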

3D Object Recognition for Localization of Outdoor Robotic Vehicles (실외 주행 로봇의 위치 추정을 위한 3 차원 물체 인식)

  • Baek, Seung-Min;Kim, Jae-Woong;Lee, Jang-Won;Zhaojin, Lu;Lee, Suk-Han
    • Proceedings of the HCI Society of Korea Conference
    • /
    • 2008.02a
    • /
    • pp.200-204
    • /
    • 2008
  • In this paper, to solve the localization problem for outdoor navigation of robotic vehicles, a particle-filter-based 3D object recognition framework that can estimate the pose of a building or its entrance is presented. The framework fuses multiple pieces of evidence and matches an object model over a sequence of images for robust recognition and pose estimation of 3D objects. The proposed approach features 1) the automatic selection and collection of an optimal set of evidence, 2) the derivation of multiple interpretations, as particles representing possible object poses in 3D space, with probabilities assigned by matching the object model against the evidence, and 3) the particle filtering of these interpretations over time using additional evidence obtained from the image sequence. The approach has been validated by stereo-camera experiments on 3D object recognition and pose estimation, in which a combination of photometric and geometric features is used as evidence.
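
A minimal sketch of the generic particle-filter loop the abstract describes (diffuse, re-weight by evidence, resample); the pose parameterization, noise level, and the `match_score` likelihood are hypothetical placeholders rather than the paper's actual measurement model.

```python
import numpy as np

def particle_filter_step(particles, weights, match_score, motion_sigma=0.05):
    """particles: (N, 6) array of pose hypotheses [x, y, z, roll, pitch, yaw].
    match_score(pose) -> likelihood of the image evidence given the pose."""
    N = len(particles)
    # 1) Diffuse particles to account for pose uncertainty between frames
    particles = particles + np.random.normal(0.0, motion_sigma, particles.shape)
    # 2) Re-weight each interpretation by how well it matches the new evidence
    weights = weights * np.array([match_score(p) for p in particles])
    weights = weights / weights.sum()
    # 3) Resample to concentrate particles on the most probable poses
    idx = np.random.choice(N, size=N, p=weights)
    return particles[idx], np.full(N, 1.0 / N)
```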


A Scheme for Matching Satellite Images Using SIFT (SIFT를 이용한 위성사진의 정합기법)

  • Kang, Suk-Chen;Whoang, In-Teck;Choi, Kwang-Nam
    • Journal of Internet Computing and Services
    • /
    • v.10 no.4
    • /
    • pp.13-23
    • /
    • 2009
  • In this paper we propose an approach for localizing objects in satellite images by matching features based on descriptor vectors, applying the Scale Invariant Feature Transform (SIFT) to object localization. First, we find keypoints in the satellite images and in the object images and generate descriptor vectors for those keypoints. Next, we calculate the similarity between descriptor vectors to obtain matched keypoints. Finally, we weight the pixels adjacent to the keypoints and determine the location of the matched object. Object localization experiments using SIFT show good results on images under various scale and affine transformations. The proposed method uses Google Earth satellite images.
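
A short OpenCV sketch of the described pipeline: keypoint detection, descriptor matching with a ratio test, and a weighted location estimate. The image file names, ratio threshold, and inverse-distance weighting are assumptions for illustration, not the paper's exact parameters.

```python
import cv2
import numpy as np

sift = cv2.SIFT_create()
obj = cv2.imread("object_template.png", cv2.IMREAD_GRAYSCALE)      # hypothetical file
scene = cv2.imread("satellite_scene.png", cv2.IMREAD_GRAYSCALE)    # hypothetical file

# Keypoints and descriptor vectors for the object and the satellite scene
kp_o, des_o = sift.detectAndCompute(obj, None)
kp_s, des_s = sift.detectAndCompute(scene, None)

# Match descriptors and keep pairs that pass Lowe's ratio test
matcher = cv2.BFMatcher()
matches = matcher.knnMatch(des_o, des_s, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

# Estimate the object's location as the centroid of matched scene keypoints,
# weighted by match quality (a simple stand-in for the paper's weighting step)
pts = np.array([kp_s[m.trainIdx].pt for m in good])
w = np.array([1.0 / (m.distance + 1e-6) for m in good])
location = (pts * w[:, None]).sum(axis=0) / w.sum()
print("estimated object location (x, y):", location)
```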


Local and Global Information Exchange for Enhancing Object Detection and Tracking

  • Lee, Jin-Seok;Cho, Shung-Han;Oh, Seong-Jun;Hong, Sang-Jin
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.6 no.5
    • /
    • pp.1400-1420
    • /
    • 2012
  • Object detection and tracking with visual sensors is a critical component of surveillance systems and presents many challenges. This paper addresses the enhancement of object detection and tracking by combining multiple visual sensors. The proposed enhancement compensates for missed detections based on the partial detection of objects by multiple visual sensors. When one or more visual sensors detect an object, the detected object's local positions are transformed into a global object position. This exchange of local and global information allows a missed local object position to be recovered. However, the exchange may also degrade detection and tracking performance by incorrectly recovering a local object position that was propagated from a false detection. Furthermore, local object positions corresponding to an identical object can be transformed into nonequivalent global object positions because of detection uncertainty caused by shadows or other artifacts. We improve performance by preventing the propagation of false detections, and we present an evaluation method for the final global object position. The proposed method is analyzed and evaluated using case studies.
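
The local-to-global exchange can be sketched under the assumption that each camera's ground-plane homography is known from calibration; the identity matrices and pixel coordinates below are placeholders.

```python
import numpy as np

def to_global(local_xy, H):
    """Project a camera-local position into global coordinates via homography H."""
    p = H @ np.array([local_xy[0], local_xy[1], 1.0])
    return p[:2] / p[2]

def to_local(global_xy, H):
    """Recover a missed local position from a shared global one (inverse mapping)."""
    p = np.linalg.inv(H) @ np.array([global_xy[0], global_xy[1], 1.0])
    return p[:2] / p[2]

# Camera A detected the object, camera B missed it:
H_A = np.eye(3)                          # placeholder calibration for camera A
H_B = np.eye(3)                          # placeholder calibration for camera B
g = to_global((120.0, 240.0), H_A)       # global position from camera A's detection
recovered_local_B = to_local(g, H_B)     # recovered local position for camera B
```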

Quickly Map Renewal through IPM-based Image Matching with High-Definition Map (IPM 기반 정밀도로지도 매칭을 통한 지도 신속 갱신 방법)

  • Kim, Duk-Jung;Lee, Won-Jong;Kim, Gi-Chang;Choi, Yun-Soo
    • Korean Journal of Remote Sensing
    • /
    • v.37 no.5_1
    • /
    • pp.1163-1175
    • /
    • 2021
  • In autonomous driving, road markings are an essential element for object tracking and path planning, and they also provide important information for localization. This paper presents an approach to measure road surface markings and update the high-definition (HD) map by matching them using inverse perspective mapping (IPM). IPM removes perspective effects from the vehicle's front camera image and remaps it to the 2D ground plane, creating a bird's-eye-view region that can be aligned with HD map regions. In addition, markings such as stop lines, crosswalks, dotted lines, solid lines, letters, and arrows are recognized and compared with objects on the HD map to decide whether the map needs updating. The location of a newly installed object can be obtained by referring to the measured positions of surrounding objects already on the HD map. As a result, we obtain highly accurate update results at very low computational cost, using only low-cost cameras and GNSS/INS sensors.
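
A minimal IPM sketch with OpenCV, assuming four road-plane correspondences are known from calibration; the source/destination points, file names, and output resolution are hypothetical values, not the paper's calibration.

```python
import cv2
import numpy as np

img = cv2.imread("front_camera.png")  # hypothetical input frame

# Pixel coordinates of a road-plane trapezoid in the camera image (placeholders)
src = np.float32([[480, 500], [800, 500], [1180, 720], [100, 720]])
# Corresponding rectangle in the bird's-eye-view image (placeholder scale)
dst = np.float32([[0, 0], [400, 0], [400, 600], [0, 600]])

# Homography that removes the perspective effect and remaps to the 2D ground plane
H = cv2.getPerspectiveTransform(src, dst)
birdview = cv2.warpPerspective(img, H, (400, 600))
cv2.imwrite("birdview.png", birdview)
```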

Relative localization errors: The effect of reference location on the errors (상대적인 위치지각의 왜곡: 참조자극의 위치가 왜곡에 미치는 영향)

  • Li, Hyung-Chul
    • Korean Journal of Cognitive Science
    • /
    • v.15 no.3
    • /
    • pp.15-24
    • /
    • 2004
  • The perceived position of a flashing target object is generally biased toward the direction of eye movement when there is no reference around the target. The current research examined the localization accuracy of a flashing target relative to a static reference. The perceived location of the target relative to the reference was distorted, and the pattern of perceptual distortion depended systematically on the position of the reference relative to the target. This result was observed consistently regardless of the distance between the reference and the target and of the direction of pursuit eye movement. We discuss how these results can be explained by theories previously proposed to account for object localization.


Indoor Localization Method using Single Inertial and Ultrasonic Sensors (단일 관성 센서와 초음파를 이용한 실내 위치추정 방법)

  • Ryu, Seoung-Bum;Song, Chang-Woo;Chung, Kyung-Yong;Rim, Kee-Wook;Lee, Jung-Hyun
    • The Journal of the Korea Contents Association
    • /
    • v.10 no.4
    • /
    • pp.115-122
    • /
    • 2010
  • Most intelligent services provided today operate based on the user's location. Because the numerous devices used for indoor localization services have their own functions and operating systems, middleware interoperability and diversity are needed to connect and control them. Indoor localization methods that rely only on inertial sensors are also relatively inefficient, because their cost grows with the size of the space. Accordingly, the indoor user localization method proposed in this study supports integrated services through the OSGi framework, an open-source project, and corrects the inertial-sensor errors using the accurate distance to a specific object measured with an ultrasonic sensor. Furthermore, it reduces errors resulting from differences in response rate by adding a reliability term.
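
The correction idea can be sketched as nudging the dead-reckoned position along the line of sight to a known object so that it agrees with the ultrasonic range; the blending weight standing in for the paper's reliability term, and the anchor coordinates, are assumptions.

```python
import numpy as np

def correct_position(pred_xy, anchor_xy, measured_range, reliability=0.7):
    """pred_xy: dead-reckoned position; anchor_xy: known object position;
    measured_range: ultrasonic distance to the anchor (same units)."""
    pred_xy, anchor_xy = np.asarray(pred_xy, float), np.asarray(anchor_xy, float)
    direction = pred_xy - anchor_xy
    predicted_range = np.linalg.norm(direction)
    if predicted_range < 1e-9:
        return pred_xy
    # Position that would exactly explain the ultrasonic measurement
    on_range = anchor_xy + direction * (measured_range / predicted_range)
    # Blend prediction and measurement according to the reliability weight
    return (1.0 - reliability) * pred_xy + reliability * on_range

print(correct_position([2.0, 1.0], [0.0, 0.0], 2.0))
```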

Fixed-Point Modeling and Performance Analysis of a SIFT Keypoints Localization Algorithm for SoC Hardware Design (SoC 하드웨어 설계를 위한 SIFT 특징점 위치 결정 알고리즘의 고정 소수점 모델링 및 성능 분석)

  • Park, Chan-Ill;Lee, Su-Hyun;Jeong, Yong-Jin
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.45 no.6
    • /
    • pp.49-59
    • /
    • 2008
  • SIFT (Scale Invariant Feature Transform) is an algorithm that extracts feature vectors at pixels around keypoints, i.e., points whose intensities differ strongly from their neighbors, such as vertices and edges of an object. The SIFT algorithm is being actively researched for various image processing applications, including 3D image construction, and its most computation-intensive stage is keypoint localization. In this paper, we develop a fixed-point model of keypoint localization and propose an efficient hardware architecture for embedded applications. The bit-lengths of key variables are determined based on two performance measures: localization accuracy and error rate. Compared with the original algorithm (implemented in Matlab), the accuracy and error rate of the proposed fixed-point model are 93.57% and 2.72%, respectively. In addition, we found that most of the missing keypoints appear at object edges, which are not very important for keypoint matching. We estimate that the hardware implementation will achieve a processing speed of 10 to 15 frames/sec, whereas the fixed-point implementation takes 10 seconds per frame on a Pentium Core2Duo (2.13 GHz) and one hour per frame on an ARM9 (400 MHz).
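
A small sketch of what fixed-point modeling of a variable looks like: a value is quantized to a chosen fractional bit-length and the quantization error is measured against the floating-point reference. The bit-widths and the example value are placeholders, not the bit-lengths chosen in the paper.

```python
import numpy as np

def to_fixed(x, frac_bits=12, total_bits=24):
    """Quantize x to a signed fixed-point format with frac_bits fractional bits."""
    scale = 1 << frac_bits
    lo, hi = -(1 << (total_bits - 1)), (1 << (total_bits - 1)) - 1
    q = np.clip(np.round(np.asarray(x) * scale), lo, hi)  # saturate to the word length
    return q / scale  # return the quantized real value for error analysis

# Example: a sub-pixel keypoint offset computed in float vs. fixed point
offset_float = 0.37218
offset_fixed = to_fixed(offset_float)
print("quantization error:", abs(offset_float - offset_fixed))
```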

Fuzzy Based Mobile Robot Control with GUI Environment (GUI환경을 갖는 퍼지기반 이동로봇제어)

  • Hong, Seon-Hack
    • Journal of the Institute of Electronics Engineers of Korea IE
    • /
    • v.43 no.4
    • /
    • pp.128-135
    • /
    • 2006
  • This paper proposes a fuzzy-based sensor fusion control method that uses self-localization in the environment, position data from encoder dead reckoning, and a world map built from sonic sensors. The proposed fuzzy-based sensor fusion system recognizes objects and extracts features such as edges, distances, and patterns to generate the world map and perform self-localization. The fuzzy-based mobile robot control is demonstrated through experiments in a corridor environment.
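
A toy sketch of fuzzy control in the spirit described above: a sonic-sensor distance is fuzzified with simple membership functions and defuzzified into a speed command. The membership breakpoints and rule outputs are hypothetical, not the paper's rule base.

```python
import numpy as np

def falling(x, a, b):
    """Membership that is 1 below a, 0 above b, linear in between."""
    return float(np.clip((b - x) / (b - a), 0.0, 1.0))

def rising(x, a, b):
    """Membership that is 0 below a, 1 above b, linear in between."""
    return float(np.clip((x - a) / (b - a), 0.0, 1.0))

def tri(x, a, b, c):
    """Triangular membership with feet at a and c and peak at b."""
    return min(rising(x, a, b), falling(x, b, c))

def speed_command(distance_m):
    # Fuzzify the sonic-sensor distance (breakpoints are placeholders)
    near = falling(distance_m, 0.5, 1.5)
    mid = tri(distance_m, 0.5, 1.5, 3.0)
    far = rising(distance_m, 1.5, 3.0)
    # Rules: near -> stop, mid -> slow, far -> cruise (weighted-average defuzzification)
    speeds = np.array([0.0, 0.3, 0.8])       # m/s rule outputs (placeholders)
    weights = np.array([near, mid, far])
    return float((weights @ speeds) / (weights.sum() + 1e-9))

print(speed_command(1.0))
```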

AoA Localization System based on Zigbee Experimentation and Realization (Zigbee 기반 AoA 위치인식 시스템 실험 및 구현)

  • Cho, Ho-Seong;Park, Chul-Young;Park, Dae-Heon;Park, Jang-Woo
    • Journal of Advanced Navigation Technology
    • /
    • v.15 no.1
    • /
    • pp.83-90
    • /
    • 2011
  • Localization, which measures the position of an object or a person, is a core technology for information exchange and environment monitoring. Localization techniques have been studied extensively and can be applied to logistics, medicine, robotics, and other fields, but applying them is often costly. Hence, in this paper we propose a low-cost angle-of-arrival (AoA) localization system based on Zigbee. The system measures RSSI values while a stepping motor rotates a directional antenna attached to a Zigbee module. When the measured RSSI is at its maximum, the receiver takes the stepping motor's rotation angle as the angle toward each beacon located at the corners, and the receiver's position is then calculated by applying the AoA localization method. The measured results show an error of about 35 to 36 cm.
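
The final position calculation can be sketched as intersecting the two bearing lines obtained from the RSSI-peak angles toward two corner beacons; the beacon coordinates and angles below are hypothetical.

```python
import numpy as np

def locate(beacon1, theta1, beacon2, theta2):
    """beaconN: (x, y) of a corner beacon; thetaN: bearing from the receiver
    toward beacon N in radians (measured from the x-axis). Returns the receiver
    position as the intersection of the two bearing lines."""
    d1 = np.array([np.cos(theta1), np.sin(theta1)])
    d2 = np.array([np.cos(theta2), np.sin(theta2)])
    # receiver + t1*d1 = beacon1 and receiver + t2*d2 = beacon2
    # => -t1*d1 + t2*d2 = beacon2 - beacon1; solve for t1, t2
    A = np.column_stack((-d1, d2))
    t = np.linalg.solve(A, np.asarray(beacon2) - np.asarray(beacon1))
    return np.asarray(beacon1) - t[0] * d1

# Beacons at two corners, bearings of 225 deg and 315 deg -> receiver at (2, 2)
print(locate((0.0, 0.0), np.deg2rad(225), (4.0, 0.0), np.deg2rad(315)))
```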