• Title/Summary/Keyword: Object Localization


Localization Algorithm for Moving Objects Based on Maximum Measurement Value in WPAN (WPAN에서 최대 측정거리 값을 이용한 이동객체 위치추정 보정 알고리즘)

  • Choi, Chang Yong;Lee, Dong Myung
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.39C no.5
    • /
    • pp.407-412
    • /
    • 2014
  • Interest in and demand for Location Based Services (LBS) that use the Global Positioning System (GPS) and Wi-Fi are growing rapidly worldwide. Experimental results have shown that large errors frequently occur when the distances between an anchor node and a mobile node are measured in an indoor localization environment of a Wireless Personal Area Network (WPAN). In this paper, a localization compensation algorithm based on the maximum measurement value ($LCA_{MMV}$) for moving objects in WPAN is proposed, and the performance of the algorithm is analyzed through experiments on three mobile-node movement scenarios. The experiments confirm that the proposed algorithm improves the average localization accuracy over Symmetric Double-Sided Two-Way Ranging (SDS-TWR) and triangulation by an average of 40.9 cm, 77.6 cm, and 6.3 cm in scenarios 1-3, respectively.
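
As a point of reference for the ranging-based localization discussed above, the following is a minimal sketch of least-squares trilateration from anchor-to-node range measurements, the baseline step that ranging schemes such as SDS-TWR feed into. The anchor layout and ranges are illustrative, and the paper's maximum-measurement-value compensation step is not reproduced here.

```python
import numpy as np

def trilaterate(anchors, ranges):
    """Estimate a 2-D position from >= 3 anchors and measured ranges.

    The circle equations are linearized against the last anchor and the
    resulting system is solved by least squares.
    """
    anchors = np.asarray(anchors, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    ref, r_ref = anchors[-1], ranges[-1]
    A = 2.0 * (anchors[:-1] - ref)
    b = (r_ref ** 2 - ranges[:-1] ** 2
         + np.sum(anchors[:-1] ** 2, axis=1) - np.sum(ref ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Illustrative example: three anchors at the corners of a 10 m x 10 m room.
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = np.array([4.0, 3.0])
ranges = [np.linalg.norm(true_pos - np.asarray(a)) for a in anchors]
print(trilaterate(anchors, ranges))   # approximately [4. 3.]
```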

A Study on the Implementation of RFID-based Autonomous Navigation System for Robotic Cellular Phone (RCP)

  • Choe, Jae-Il;Choi, Jung-Wook;Oh, Dong-Ik;Kim, Seung-Woo
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2005.06a
    • /
    • pp.457-462
    • /
    • 2005
  • The industrial and economic importance of the CP (Cellular Phone) is growing rapidly. Combined with IT technology, the CP is currently one of the most attractive technologies for all. However, unless we find a breakthrough, its growth may slow down soon. RT (Robot Technology) is considered one of the most promising next-generation technologies. Unlike the industrial robots of the past, today's robots require advanced technologies such as soft computing, human-friendly interfaces, interaction techniques, speech recognition, object recognition, and many others. In this study, we present a new technological concept named RCP (Robotic Cellular Phone), which combines RT and CP, with the vision of opening a new direction for the advancement of CP, IT, and RT together. RCP consists of three sub-modules, including $RCP^{Mobility}$ and $RCP^{Interaction}$. $RCP^{Mobility}$ is the main focus of this paper. It is an autonomous navigation system that combines RT mobility with the CP. Through $RCP^{Mobility}$, we should be able to provide the CP with robotic functionalities such as auto-charging and real-world robotic entertainment. Eventually, the CP may become a robotic pet to humans. $RCP^{Mobility}$ consists of various controllers, two of the main ones being the trajectory controller and the self-localization controller. While the trajectory controller is responsible for the wheel-based navigation of the RCP, the self-localization controller provides localization information for the moving RCP. With the coordinate information acquired from the RFID-based self-localization controller, the trajectory controller refines the RCP's movement to achieve better navigation. In this paper, a prototype system we developed for $RCP^{Mobility}$ is presented. We describe the overall structure of the system and provide experimental results of the RCP navigation.
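
The abstract above describes a trajectory controller that is refined by coordinates from an RFID-based self-localization controller. A minimal sketch of that general pattern, assuming hypothetical tag positions and a simple blending rule rather than the authors' actual controllers, is:

```python
import math

# Hypothetical map of RFID tag IDs to known floor coordinates (metres).
TAG_POSITIONS = {"tag_01": (1.0, 0.0), "tag_02": (2.0, 1.0)}

class OdometryWithRfid:
    """Dead reckoning between tag reads, corrected whenever a tag is detected."""

    def __init__(self, x=0.0, y=0.0, theta=0.0, blend=0.8):
        self.x, self.y, self.theta = x, y, theta
        self.blend = blend  # how strongly a tag read overrides odometry

    def predict(self, v, w, dt):
        """Propagate the pose from linear velocity v and angular velocity w."""
        self.theta += w * dt
        self.x += v * math.cos(self.theta) * dt
        self.y += v * math.sin(self.theta) * dt

    def correct(self, tag_id):
        """Pull the drifting estimate toward the detected tag's known position."""
        if tag_id in TAG_POSITIONS:
            tx, ty = TAG_POSITIONS[tag_id]
            self.x = (1 - self.blend) * self.x + self.blend * tx
            self.y = (1 - self.blend) * self.y + self.blend * ty
```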


Cooperative Control of Mobile Robot for Carrying Object (물체 운반을 위한 다수 로봇의 협조제어)

  • Jeong, Hee-In;Hoang, Nhat-Minh;Woo, Chang-Jun;Lee, Jangmyung
    • The Journal of Korea Robotics Society
    • /
    • v.10 no.3
    • /
    • pp.139-145
    • /
    • 2015
  • This paper proposes a method for the cooperative control of three mobile robots that carry an object placed on the floor together. Each robot moves independently from its own location to a pre-designated location so that it can grasp the object stably. After grasping the common object, coordination among the robots is achieved in a master-slave mode: a trajectory is planned for the master robot, and the distances from the master robot to the two slave robots are kept constant during the carrying operation. Localization of the mobile robots is implemented using encoder data and inverse kinematics, since the whole system suffers less slippage than a single mobile robot. Before the carrying operation, the object is lifted cooperatively using the manipulators attached on top of the mobile robots. Real cooperative lifting and carrying operations are implemented to show the feasibility of the master-slave mode control based on kinematics, using the mobile manipulators developed for this research.
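
Since the abstract relies on encoder-based kinematic localization, the following is a minimal sketch of differential-drive odometry from encoder tick counts. The wheel radius, track width, and encoder resolution are illustrative values, not those of the robots used in the paper.

```python
import math

WHEEL_RADIUS = 0.05    # wheel radius [m] (illustrative)
TRACK_WIDTH = 0.30     # distance between the two wheels [m] (illustrative)
TICKS_PER_REV = 1024   # encoder resolution (illustrative)

def update_pose(pose, left_ticks, right_ticks):
    """Integrate one sample of encoder tick deltas into the pose (x, y, theta)."""
    x, y, theta = pose
    d_left = 2 * math.pi * WHEEL_RADIUS * left_ticks / TICKS_PER_REV
    d_right = 2 * math.pi * WHEEL_RADIUS * right_ticks / TICKS_PER_REV
    d_center = (d_left + d_right) / 2.0
    d_theta = (d_right - d_left) / TRACK_WIDTH
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    return (x, y, theta + d_theta)

pose = (0.0, 0.0, 0.0)
for left, right in [(100, 100), (100, 120), (100, 120)]:
    pose = update_pose(pose, left, right)
print(pose)
```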

An Effective Moving Cast Shadow Removal in Gray Level Video for Intelligent Visual Surveillance (지능 영상 감시를 위한 흑백 영상 데이터에서의 효과적인 이동 투영 음영 제거)

  • Nguyen, Thanh Binh;Chung, Sun-Tae;Cho, Seongwon
    • Journal of Korea Multimedia Society
    • /
    • v.17 no.4
    • /
    • pp.420-432
    • /
    • 2014
  • In the detection of moving objects from video sequences, an essential process for intelligent visual surveillance, the cast shadows accompanying moving objects differ from the background, so they are easily extracted as part of the foreground object blobs, which causes errors in the localization, segmentation, tracking, and classification of objects. Most previous research on moving cast shadow detection and removal utilizes color information about objects and scenes. In this paper, we propose a novel cast shadow removal method for moving objects in gray-level video data for visual surveillance applications. The proposed method exploits observations about edge patterns in the shadow region of the current frame and the corresponding region of the background scene: a Laplacian edge detector is applied to the blob regions in the current frame and to the corresponding regions in the background scene, and the product of the two edge responses then separates moving-object pixels from the other blob pixels in the foreground mask. The minimal rectangular regions containing all blob pixels classified as moving-object pixels are extracted. The proposed method is simple but proves very effective in practice for Adaptive Gaussian Mixture Model-based object detection in intelligent visual surveillance applications, as verified through experiments.
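
A minimal sketch of the edge-comparison idea described above, assuming OpenCV: a Laplacian edge detector is applied to a blob region in the current frame and to the corresponding background region, and pixels whose edge response is not explained by the background are kept as object pixels. The threshold and the decision rule here are illustrative; the paper's exact product-based rule is not reproduced.

```python
import cv2
import numpy as np

def object_pixels_in_blob(frame_gray, background_gray, blob_mask, thresh=15.0):
    """Return a boolean mask of blob pixels judged to belong to the moving object."""
    lap_frame = np.abs(cv2.Laplacian(frame_gray, cv2.CV_32F))
    lap_bg = np.abs(cv2.Laplacian(background_gray, cv2.CV_32F))
    # An edge in the current frame with no counterpart in the background
    # suggests object texture rather than a shadowed copy of the background.
    new_edges = (lap_frame > thresh) & (lap_bg <= thresh)
    return new_edges & (blob_mask > 0)
```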

Object Localization in Sensor Network using the Infrared Light based Sector and Inertial Measurement Unit Information (적외선기반 구역정보와 관성항법장치정보를 이용한 센서 네트워크 환경에서의 물체위치 추정)

  • Lee, Min-Young;Lee, Soo-Yong
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.16 no.12
    • /
    • pp.1167-1175
    • /
    • 2010
  • This paper presents the use of inertial measurement unit information and infrared sector information for estimating the position of an object. Travel distance is usually calculated by double integration of the accelerometer output with respect to time; however, accumulated errors due to drift are inevitable. Changes in the orientation of the accelerometer also cause error, because gravity is added to the measured acceleration. Unless the three-axis orientation is completely identified, the accelerometer alone does not provide correct acceleration for estimating the travel distance. We propose a way of minimizing the error due to the change of orientation. In order to reduce the accumulated error, the infrared sector information is fused with the inertial measurement unit information. Infrared sector information has highly deterministic characteristics, unlike RFID. By placing several infrared emitters on the ceiling, the floor is divided into many sectors, and each sector is given a unique identification. Infrared-light-based sector information tells which sector the object is in, but the uncertainty is too large if only the sector information is used. This paper presents an algorithm that combines the inertial measurement unit information and the sector information so that the uncertainty becomes smaller. It also introduces a framework that can be used with other types of artificial landmarks. The characteristics of the developed infrared-light-based sector and the proposed algorithm are verified through experiments.
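
A minimal sketch of the fusion idea above, under simplifying assumptions: planar acceleration (with gravity already removed) is double-integrated, and the drifting position estimate is clamped into the extent of the infrared sector currently reported. The sector layout and the clamping rule are illustrative, not the paper's algorithm.

```python
import numpy as np

# Hypothetical sector layout: sector ID -> (x_min, x_max, y_min, y_max) in metres.
SECTOR_BOUNDS = {
    "A": (0.0, 2.0, 0.0, 2.0),
    "B": (2.0, 4.0, 0.0, 2.0),
}

class SectorAidedDeadReckoning:
    def __init__(self):
        self.pos = np.zeros(2)
        self.vel = np.zeros(2)

    def integrate(self, accel_xy, dt):
        """Double-integrate planar acceleration (gravity assumed removed)."""
        self.vel += np.asarray(accel_xy, dtype=float) * dt
        self.pos += self.vel * dt

    def sector_update(self, sector_id):
        """Bound the drifting estimate by the reported sector's known extent."""
        if sector_id in SECTOR_BOUNDS:
            x_min, x_max, y_min, y_max = SECTOR_BOUNDS[sector_id]
            self.pos[0] = np.clip(self.pos[0], x_min, x_max)
            self.pos[1] = np.clip(self.pos[1], y_min, y_max)
```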

Facial Feature Localization from 3D Face Image using Adjacent Depth Differences (인접 부위의 깊이 차를 이용한 3차원 얼굴 영상의 특징 추출)

  • 김익동;심재창
    • Journal of KIISE:Software and Applications
    • /
    • v.31 no.5
    • /
    • pp.617-624
    • /
    • 2004
  • This paper describes a new facial feature localization method that uses Adjacent Depth Differences (ADD) on a 3D facial surface. In general, humans recognize how deep or shallow a region is, relative to its neighbors, by comparing depth information among the regions of an object. The larger the depth difference between regions, the easier each region is to recognize. Using this principle, facial feature extraction becomes easier, more reliable, and faster. 3D range images are used as input, and ADD values are obtained by differencing two range values separated by a fixed coordinate distance, in both the horizontal and vertical directions. The ADD maps and the input image are analyzed to extract facial features, and the nose region, the most prominent feature on a 3D facial surface, is localized effectively and accurately.
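
A minimal sketch of computing the ADD maps as described: range values of pixel pairs separated by a fixed offset are differenced horizontally and vertically. The offset and the subsequent analysis (e.g., locating the nose from strong transitions) are illustrative assumptions.

```python
import numpy as np

def adjacent_depth_differences(range_image, offset=5):
    """Return (horizontal, vertical) ADD maps for a 2-D range image."""
    depth = np.asarray(range_image, dtype=float)
    add_h = np.zeros_like(depth)
    add_v = np.zeros_like(depth)
    add_h[:, :-offset] = depth[:, offset:] - depth[:, :-offset]
    add_v[:-offset, :] = depth[offset:, :] - depth[:-offset, :]
    return add_h, add_v

# A protruding feature such as the nose tip shows up as a strong
# positive-to-negative transition in both maps.
```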

Improved ultrasonic beacon system for indoor localization

  • Shin, Su-Young;Choi, Jong-Suk;Kim, Byoung-Hoon;Park, Mi-Gnong
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2005.06a
    • /
    • pp.1775-1780
    • /
    • 2005
  • One of the most important factors enabling mobile objects to achieve their purpose is information about their positions. In this paper, we propose an improved beacon system, with ultrasonic sensors attached, for the indoor localization of mobile objects. It is designed to cover a wider space and to estimate positions more accurately than existing beacon systems. Existing beacon systems have the constraint that one beacon cannot cover a wide area, since ultrasonic sensors are limited in their beam angle, on which signal strength depends. Hence, we use an active beacon consisting of a pan-tilt mechanism and a beacon module. The active beacon can always aim at the mobile object, using the pan-tilt mechanism to transmit the strongest ultrasonic signal toward it. In addition, the system is inexpensive because it requires about half as many beacons as the existing system. Finally, the results show the difference between the active beacon system and existing beacon systems, and how accurate it is.
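
Two small pieces that such an active-beacon scheme rests on can be sketched as follows: converting an ultrasonic time of flight into a range (assuming an RF synchronization pulse), and computing the pan angle needed to aim the beacon at the object's last estimated position. The speed of sound and the geometry are illustrative, not taken from the paper.

```python
import math

SPEED_OF_SOUND = 343.0   # [m/s], roughly at 20 degrees Celsius

def tof_to_range(tof_seconds):
    """Range from a one-way ultrasonic time of flight (RF sync assumed)."""
    return SPEED_OF_SOUND * tof_seconds

def pan_angle_to_target(beacon_xy, target_xy):
    """Pan angle (radians) that points the beacon at the target position."""
    dx = target_xy[0] - beacon_xy[0]
    dy = target_xy[1] - beacon_xy[1]
    return math.atan2(dy, dx)

print(tof_to_range(0.01))                                  # ~3.43 m
print(math.degrees(pan_angle_to_target((0, 0), (2, 2))))   # 45.0 degrees
```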


Navigation System of UUV Using Multi-Sensor Fusion-Based EKF (융합된 다중 센서와 EKF 기반의 무인잠수정의 항법시스템 설계)

  • Park, Young-Sik;Choi, Won-Seok;Han, Seong-Ik;Lee, Jang-Myung
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.22 no.7
    • /
    • pp.562-569
    • /
    • 2016
  • This paper proposes a navigation system with a robust localization method for an unmanned underwater vehicle. For robust localization with an IMU (Inertial Measurement Unit), a DVL (Doppler Velocity Log), and depth sensors, an EKF (Extended Kalman Filter) is utilized to fuse the multiple nonlinear sensor data. Note that GPS (Global Positioning System), which can obtain the absolute coordinates of the vehicle, cannot be used underwater. Additionally, the DVL is used to measure the relative velocity of the underwater vehicle. The DVL measures the velocity of an object by using the Doppler effect, in which the relative velocity between a sound source and an observer shifts the sound frequency. When the vehicle is moving, the motion trajectory to a target position can be recorded by the sensors attached to the vehicle. The performance of the proposed navigation system is verified through real experiments in which an unmanned underwater vehicle reached a target position using the IMU as the primary sensor and the DVL as the secondary sensor.
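
A heavily simplified sketch of the filter structure described above, assuming a world-frame position/velocity state: IMU acceleration drives the prediction, while DVL velocity and depth measurements provide the corrections. Attitude handling, body-to-world rotation, and the noise values are omitted or illustrative, so this is not the authors' filter design.

```python
import numpy as np

class SimpleNavFilter:
    """Position/velocity filter with IMU prediction and DVL/depth updates."""

    def __init__(self, dt=0.1):
        self.dt = dt
        self.x = np.zeros(6)                  # [px, py, pz, vx, vy, vz]
        self.P = np.eye(6)
        self.F = np.eye(6)
        self.F[0:3, 3:6] = np.eye(3) * dt     # position integrates velocity
        self.Q = np.eye(6) * 1e-3             # illustrative process noise

    def predict(self, accel_world):
        self.x = self.F @ self.x
        self.x[3:6] += np.asarray(accel_world, dtype=float) * self.dt
        self.P = self.F @ self.P @ self.F.T + self.Q

    def _update(self, z, H, R):
        y = np.asarray(z, dtype=float) - H @ self.x
        S = H @ self.P @ H.T + R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(6) - K @ H) @ self.P

    def update_dvl(self, vel_world):          # DVL: velocity measurement
        H = np.zeros((3, 6)); H[:, 3:6] = np.eye(3)
        self._update(vel_world, H, np.eye(3) * 1e-2)

    def update_depth(self, depth):            # pressure sensor: z position
        H = np.zeros((1, 6)); H[0, 2] = 1.0
        self._update([depth], H, np.eye(1) * 1e-3)
```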

Concurrent Mapping and Localization using Range Sonar in Small AUV, SNUUVI

  • Hwang Arom;Seong Woojae;Choi Hang Soon;Lee Kyu Yuel
    • Journal of Ship and Ocean Technology
    • /
    • v.9 no.4
    • /
    • pp.23-34
    • /
    • 2005
  • The increased use of AUVs has led to the development of alternative navigation methods that use acoustic beacons and dead reckoning. This paper describes a concurrent mapping and localization (CML) scheme that uses range sonars mounted on SNUUV-I, a small test AUV developed by Seoul National University. CML is one such alternative navigation method, in which the environment the vehicle is passing through is measured; in addition, it provides the relative position of the AUV by processing the sonar measurement data. A CML technique that uses several ranging sonars is presented. The technique utilizes an extended Kalman filter to estimate the location of the AUV. In order for the algorithm to work efficiently, the nearest neighbor standard filter is introduced for data association in the CML, associating the stored targets with the sonar returns at each time step. The proposed CML algorithm is tested in simulations under various conditions. Experiments for one-dimensional navigation are conducted in a towing tank, and the results are presented. The simulation and experimental results show that the proposed CML algorithm is capable of estimating the positions of the vehicle and the object, and demonstrate that the algorithm will perform well in a real environment.
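
A minimal sketch of the nearest-neighbour data association step mentioned above: each sonar return is matched to the closest stored target, and returns falling outside a gating distance are treated as candidate new targets. The gate value and the 2-D point representation are illustrative assumptions.

```python
import numpy as np

def associate_returns(targets, returns, gate=2.0):
    """Match sonar returns to stored targets by nearest neighbour.

    Returns (matched (target_index, return_index) pairs, unmatched return indices).
    """
    pairs, unmatched = [], []
    targets = [np.asarray(t, dtype=float) for t in targets]
    for j, z in enumerate(returns):
        z = np.asarray(z, dtype=float)
        if not targets:
            unmatched.append(j)
            continue
        dists = [np.linalg.norm(z - t) for t in targets]
        i = int(np.argmin(dists))
        if dists[i] <= gate:
            pairs.append((i, j))
        else:
            unmatched.append(j)   # candidate new target
    return pairs, unmatched
```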

Detection of Speaker Position for Robot Using HRTF (머리전달함수를 이용한 로봇의 화자 위치 추정)

  • Hwang, Sung-Mook;Park, Youn-Sik;Park, Young-Jin
    • Proceedings of the Korean Society for Noise and Vibration Engineering Conference
    • /
    • 2005.11a
    • /
    • pp.637-640
    • /
    • 2005
  • We propose a sound source localization method using the Head-Related Transfer Function (HRTF), to be implemented on a given platform. HRTFs contain not only the proper time-delay information but also the phase and magnitude distortions caused by diffraction and scattering from the shading object. Therefore, a set of HRTFs for any given platform provides a substantial amount of information about the whereabouts of the source. In this study, we introduce a new phase criterion for finding the sound source location, based on an HRTF database obtained empirically in an anechoic chamber with the given platform. Using this criterion, we analyze the estimation performance of the proposed method in a household environment.
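
A minimal sketch of one way a phase criterion can be applied, assuming an HRTF database keyed by direction: the inter-microphone phase difference measured from the two signals is compared, frequency by frequency, with the phase difference predicted by each direction's HRTF pair, and the best-matching direction is returned. The database format and the error metric are illustrative, not the paper's criterion.

```python
import numpy as np

def interaural_phase(left, right):
    """Phase of the cross-spectrum between the two microphone signals."""
    L, R = np.fft.rfft(left), np.fft.rfft(right)
    return np.angle(L * np.conj(R))

def localize(left, right, hrtf_db):
    """hrtf_db: {direction_deg: (H_left, H_right)} frequency responses, assumed
    to lie on the same frequency grid as the rfft of the input signals."""
    measured = interaural_phase(left, right)
    best_dir, best_err = None, np.inf
    for direction, (hl, hr) in hrtf_db.items():
        predicted = np.angle(hl * np.conj(hr))
        diff = np.angle(np.exp(1j * (measured - predicted)))   # wrap to [-pi, pi]
        err = float(np.sum(diff ** 2))
        if err < best_err:
            best_dir, best_err = direction, err
    return best_dir
```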
