Title/Summary/Keyword: Object identification and localization


Object Identification and Localization for Image Recognition (이미지 인식을 위한 객체 식별 및 지역화)

  • Lee, Yong-Hwan; Park, Je-Ho; Kim, Youngseop
    • Journal of the Semiconductor & Display Technology, v.11 no.4, pp.49-55, 2012
  • This paper proposes an efficient method of object identification and localization for image recognition. The proposed algorithm utilizes correlogram back-projection in the YCbCr chromaticity components to handle the problem of sub-region querying. Utilizing similar spatial color information enables users to detect and locate the primary location and candidate regions accurately, without additional information about the number of objects. Compared with existing methods, experimental results show an improvement of 21%, revealing that the color correlogram is markedly more effective than the color histogram for this task. The main contribution of this paper is that a different treatment of color spaces and a histogram measure incorporating spatial color information are applied to object localization. This approach opens up new opportunities for object detection in interactive imaging and 2-D augmented reality.
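The correlogram idea in this abstract can be illustrated with a minimal sketch. This is not the authors' exact algorithm: it assumes the image has already been quantized into color indices (e.g. from the YCbCr chromaticity plane), and uses a simple autocorrelogram, which records for each color the probability that a neighbor at a given distance shares that color; back-projection then scores each pixel by that value.

```python
import numpy as np

def autocorrelogram(labels, n_colors, d=1):
    """For each quantized color c, the probability that a pixel at
    Chebyshev distance d from a pixel of color c also has color c."""
    h, w = labels.shape
    counts = np.zeros(n_colors)
    totals = np.zeros(n_colors)
    offsets = [(dy, dx) for dy in (-d, 0, d) for dx in (-d, 0, d)
               if (dy, dx) != (0, 0)]
    for y in range(h):
        for x in range(w):
            c = labels[y, x]
            for dy, dx in offsets:
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    totals[c] += 1
                    if labels[ny, nx] == c:
                        counts[c] += 1
    return np.divide(counts, totals,
                     out=np.zeros(n_colors), where=totals > 0)

def backproject(labels, gamma):
    """Score each pixel by the query model's correlogram value for its color;
    high-scoring regions are candidate object locations."""
    return gamma[labels]
```

Because the correlogram conditions on neighboring colors, back-projection rewards spatially coherent color regions, which is the property the abstract credits for outperforming a plain color histogram.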

Appearance Based Object Identification for Mobile Robot Localization in Intelligent Space with Distributed Vision Sensors

  • Jin, TaeSeok; Morioka, Kazuyuki; Hashimoto, Hideki
    • International Journal of Fuzzy Logic and Intelligent Systems, v.4 no.2, pp.165-171, 2004
  • Robots will coexist with humans and support them effectively in the near future. One of the most important aspects in the development of human-friendly robots is cooperation between humans and robots. In this paper, we propose a method for multi-object identification, in order to achieve such a human-centered system, and for robot localization in intelligent space. The intelligent space is a space in which many intelligent devices, such as computers and sensors, are distributed. The Intelligent Space achieves human-centered services by accelerating the physical and psychological interaction between humans and intelligent devices. As an intelligent device of the Intelligent Space, a color CCD camera module that includes processing and networking parts has been chosen. The Intelligent Space must identify and track multiple objects to provide appropriate services to users in a multi-camera environment. To achieve seamless tracking and location estimation, many camera modules are distributed, which causes object-identification errors among the different camera modules. This paper describes an appearance-based object representation for the distributed vision system in the Intelligent Space that achieves consistent labeling of all objects. We then discuss how to learn the object color appearance model and how to track multiple objects under occlusions.
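A common minimal form of such a color appearance model is a normalized color histogram per object, matched across cameras by histogram intersection so the same object keeps the same label. This sketch is an assumption for illustration, not the paper's specific model:

```python
import numpy as np

def color_model(pixels, bins=8):
    """Appearance model: normalized histogram over 8-bit color values."""
    h, _ = np.histogram(pixels, bins=bins, range=(0, 256))
    return h / h.sum()

def match_object(observed, known_models):
    """Histogram intersection: return the label whose stored model best
    matches the observed one, giving consistent labels across cameras."""
    scores = {label: np.minimum(observed, m).sum()
              for label, m in known_models.items()}
    return max(scores, key=scores.get)
```

Each camera module computes `color_model` for a detected blob and queries the shared label set with `match_object`, so a hand-off between cameras does not re-label the object.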

A Study on the RFID Tag-Floor Based Navigation (RFID 태그플로어 방식의 내비게이션에 관한 연구)

  • Choi Jung-Wook; Oh Dong-Ik; Kim Seung-Woo
    • Journal of Institute of Control, Robotics and Systems, v.12 no.10, pp.968-974, 2006
  • We are moving into the era of ubiquitous computing. The Ubiquitous Sensor Network (USN) is a base of this computing paradigm, in which recognizing the identity and the position of objects is important. For object identification, RFID tags are commonly used. For object positioning, sensors such as laser and ultrasonic scanners are popular. Recently, there have been a few attempts to apply RFID technology to robot localization by replacing these sensors with RFID readers, to achieve simpler, unified USN settings. However, RFID does not provide enough sensing accuracy for some USN applications such as robot navigation, mainly because of its inaccuracy in distance measurement. In this paper, we describe our approach to achieving accurate navigation using RFID. We rely solely on the RFID mechanism for localization, providing coordinate information through RFID tag-installed floors. With the accurate positional information stored in the RFID tags, we compensate for coordinate errors accumulated during wheel-based robot navigation. We especially focus on how to distribute RFID tags (tag pattern) and how many to place (tag granularity) on the RFID tag-floor. To determine efficient tag granularities and tag patterns, we developed a simulation program. We define the navigation error and use it to compare the effectiveness of navigation. We analyze the simulation results to determine the efficient granularities and tag arrangement patterns that can improve the effectiveness of RFID navigation in general.
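The error-compensation idea can be sketched in a few lines: wheel odometry accumulates drift each move, and whenever the robot passes over a floor tag, the estimate snaps back to the coordinate stored in that tag. `tag_at` is a hypothetical lookup standing in for the RFID reader; the fixed drift term is a stand-in for real odometry error.

```python
def navigate(moves, tag_at, drift=0.05):
    """Dead-reckon over (dx, dy) wheel moves; each move accumulates a fixed
    drift error, and reading a floor tag snaps the estimate back to the
    coordinate stored in that tag."""
    x, y = 0.0, 0.0
    for dx, dy in moves:
        x += dx + drift                    # wheel odometry with systematic drift
        y += dy + drift
        tag = tag_at(round(x), round(y))   # hypothetical RFID read at grid cell
        if tag is not None:
            x, y = tag                     # tag stores its exact floor coordinate
    return x, y
```

Denser tag granularity means more frequent snaps and less accumulated error, which is exactly the pattern/granularity trade-off the paper's simulation explores.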

An Advanced RFID Localization Algorithm Based on Region Division and Error Compensation

  • Li, Junhuai; Zhang, Guomou; Yu, Lei; Wang, Zhixiao; Zhang, Jing
    • KSII Transactions on Internet and Information Systems (TIIS), v.7 no.4, pp.670-691, 2013
  • In an RSSI-based RFID (Radio Frequency IDentification) indoor localization system, the signal path loss model of each sub-region differs from the others across the localization area, owing to multi-path effects and other environmental factors. Therefore, this paper divides the localization area into many sub-regions and constructs a separate signal path loss model for each. An improved LANDMARC method is then proposed. First, the deployment principle of RFID readers and tags is presented for constructing the localization sub-regions. Second, virtual reference tags are introduced to create a virtual signal strength space with the RFID readers and real reference tags in every sub-region. Last, a k-nearest-neighbor (KNN) algorithm is used to locate the target object, and an error-compensation algorithm is proposed to correct the localization result. Results in a real application show that the new method improves positioning accuracy by 18.2% and reduces time cost by 30% relative to the original LANDMARC method, without additional tags or readers.
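The KNN step at the core of LANDMARC can be sketched as follows (a simplified illustration, not the paper's full region-divided pipeline): the target's RSSI vector is compared with each reference tag's RSSI vector in signal space, and the k nearest reference positions are averaged with inverse-square-error weights.

```python
import numpy as np

def landmarc_knn(target_rssi, ref_rssi, ref_pos, k=3):
    """LANDMARC-style estimate: Euclidean distance between the target's
    RSSI vector and each reference tag's, then a weighted average of the
    k nearest reference-tag positions."""
    e = np.linalg.norm(ref_rssi - target_rssi, axis=1)  # signal-space distances
    idx = np.argsort(e)[:k]                             # k nearest reference tags
    w = 1.0 / (e[idx] ** 2 + 1e-9)                      # inverse-square weights
    w /= w.sum()
    return w @ ref_pos[idx]                             # weighted position estimate
```

The paper's virtual reference tags would simply enlarge `ref_rssi`/`ref_pos` with interpolated entries, densifying the signal-strength space without deploying more hardware.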

Impact force localization for civil infrastructure using augmented Kalman Filter optimization

  • Saleem, Muhammad M.; Jo, Hongki
    • Smart Structures and Systems, v.23 no.2, pp.123-139, 2019
  • Impact forces induced by external object collisions can cause serious damage to civil engineering structures. While accurate and prompt identification of such impact forces is a critical task in structural health monitoring, it is not readily feasible for civil structures, because force measurement is extremely challenging and the force location is unpredictable for full-scale field structures. This study proposes a novel approach for identifying an impact force, including its location and time history, using a small number of multi-metric observations. The method combines an augmented Kalman filter (AKF) and a genetic algorithm. The location of the impact force is determined statistically so as to minimize the AKF response-estimate error at the measured locations, and the time history of the impact force is then accurately reconstructed by optimizing the error covariances of the AKF using the genetic algorithm. The efficacy of the proposed approach is demonstrated numerically using a truss and a plate model in the presence of modeling error and measurement noise.
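The "augmented" in AKF means the unknown input force is appended to the state vector and estimated alongside the motion states. A minimal 1-DOF sketch (not the paper's truss/plate formulation; mass, time step, and noise levels are assumed values) models the force as a random walk driven by process noise:

```python
import numpy as np

def akf_step(x, P, z, dt=0.01, m=1.0, q_f=1.0, r=1e-4):
    """One predict/update step of an augmented Kalman filter for a 1-DOF
    mass: state x = [position, velocity, force], with the unknown force
    modeled as a random walk and estimated from position measurements z."""
    A = np.array([[1.0, dt,  0.5 * dt**2 / m],
                  [0.0, 1.0, dt / m],
                  [0.0, 0.0, 1.0]])       # force drives motion through the mass
    H = np.array([[1.0, 0.0, 0.0]])       # only position is measured
    Q = np.diag([0.0, 0.0, q_f])          # process noise on the force state
    x = A @ x                             # predict
    P = A @ P @ A.T + Q
    S = H @ P @ H.T + r                   # innovation covariance
    K = P @ H.T / S                       # Kalman gain
    x = x + (K * (z - H @ x)).ravel()     # update with measurement
    P = (np.eye(3) - K @ H) @ P
    return x, P
```

In the paper's scheme, a genetic algorithm would tune covariances like `q_f` and `r` to minimize the response-estimate error at the measured locations; here they are fixed for clarity.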

Object Localization in Sensor Network using the Infrared Light based Sector and Inertial Measurement Unit Information (적외선기반 구역정보와 관성항법장치정보를 이용한 센서 네트워크 환경에서의 물체위치 추정)

  • Lee, Min-Young; Lee, Soo-Yong
    • Journal of Institute of Control, Robotics and Systems, v.16 no.12, pp.1167-1175, 2010
  • This paper presents the use of inertial measurement unit information and infrared sector information for estimating the position of an object. Travel distance is usually calculated by double integration of the accelerometer output with respect to time; however, accumulated errors due to drift are inevitable. A change in the accelerometer's orientation also causes error, because gravity is added to the measured acceleration. Unless all three axis orientations are completely identified, the accelerometer alone does not provide the correct acceleration for estimating travel distance. We propose a way of minimizing the error due to orientation change. To reduce the accumulated error, the infrared sector information is fused with the inertial measurement unit information. Infrared sector information has highly deterministic characteristics, unlike RFID. By putting several infrared emitters on the ceiling, the floor is divided into many sectors, and each sector is given a unique identification. Infrared-light-based sector information tells which sector the object is in, but the uncertainty is too large if only the sector information is used. This paper presents an algorithm that combines the inertial measurement unit information and the sector information so that the uncertainty becomes smaller. It also introduces a framework that can be used with other types of artificial landmarks. The characteristics of the developed infrared-light-based sector and the proposed algorithm are verified experimentally.
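The two corrections the abstract describes, removing the gravity component that leaks into the accelerometer through tilt and bounding the double-integration drift with sector information, can be sketched in one dimension. The sector model here (equal-width sectors, snap to the boundary coordinate on entry, forward motion only) is a simplifying assumption, not the paper's exact scheme:

```python
import math

def track(acc_body, tilt, width=1.0, dt=0.01):
    """1-D position from accelerometer samples: subtract the gravity
    component leaking in through a known tilt angle, double-integrate,
    and snap the estimate to a sector boundary's known coordinate when
    a new infrared sector is entered (forward motion assumed)."""
    g = 9.81
    v = x = 0.0
    sector = 0
    for a in acc_body:
        a_lin = a - g * math.sin(tilt)   # remove gravity leakage
        v += a_lin * dt                  # integrate acceleration -> velocity
        x += v * dt                      # integrate velocity -> position
        s = int(x // width)
        if s > sector:                   # crossed into a new IR sector
            x = s * width                # boundary coordinate is known exactly
            sector = s
    return x
```

Without the gravity term, a 0.1 rad tilt injects roughly 0.98 m/s² of spurious acceleration, which the double integration quickly turns into metres of drift; the sector snap then caps whatever residual error remains.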

Multi-camera System Calibration with Built-in Relative Orientation Constraints (Part 2) Automation, Implementation, and Experimental Results

  • Lari, Zahra; Habib, Ayman; Mazaheri, Mehdi; Al-Durgham, Kaleel
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.32 no.3, pp.205-216, 2014
  • Multi-camera systems have been widely used as cost-effective tools for collecting geospatial data for various applications. In order to fully achieve the potential accuracy of these systems for object space reconstruction, careful system calibration should be carried out prior to data collection. Since the structural integrity of the involved cameras' components and the system mounting parameters cannot be guaranteed over time, multi-camera systems should be calibrated frequently to confirm the stability of the estimated parameters. Therefore, automated techniques are needed to facilitate and speed up the system calibration procedure. The automation of the multi-camera system calibration approach proposed in the first part of this paper is contingent on the automated detection, localization, and identification of the signalized object-space targets in the images. In this paper, the automation of the proposed camera calibration procedure through automatic target extraction and labelling is presented. The automated system calibration procedure is then implemented for a newly-developed multi-camera system, considering the optimum configuration for data collection. Experimental results from the implemented procedure are presented to verify the feasibility of the proposed automated procedure. A qualitative and quantitative evaluation of the system calibration parameters estimated in two calibration sessions is also presented, confirming the stability of the cameras' interior orientation and system mounting parameters.

CT Number Measurement of Residual Foreign Bodies in Face (안면부에 잔류된 다양한 이물질을 측정한 CT 계수)

  • Wee, Syeo Young; Choi, Hwan Jun; Kim, Mi Sun; Choi, Chang Yong
    • Archives of Plastic Surgery, v.35 no.4, pp.423-430, 2008
  • Purpose: Computed tomography should theoretically improve the detection of foreign bodies and provide more information about the adjacent soft tissues, and the CT scanner and PACS program proved to be excellent instruments for detecting and localizing most facial foreign bodies above certain minimum levels of detectability. The severity of injury in penetrating facial trauma is often underestimated by physical examination, and diagnosis of a retained foreign object is always critical. Methods: From March 2005 to February 2008, a study was done with 200 patients who had facial trauma. Axial and coronal CT images were obtained with a General Electric (Milwaukee, Wis) 9800 CT scanner at 130 kV, 90 mA, with a 2-mm section thickness and a 512×512 matrix. Results: Axial and coronal CT images at various window widths should be used as the first imaging modality to detect facial foreign bodies. The attenuation coefficients for the metallic and nonmetallic foreign bodies ranged from -437 to +3071 HU. As a general rule, metallic foreign bodies produced more Hounsfield artifacts than nonmetallic ones, providing a clue to their composition. All of the metallic foreign bodies were represented by a single peak with a maximum attenuation coefficient of +3071 HU. Of the nonmetallic foreign bodies, glass had an attenuation coefficient ranging from +105 to +2039, while plastic had a much lower coefficient, ranging from -62 to -35. Wood had the lowest range of attenuation coefficients: -491 to -437. Conclusion: The PACS program allows one to distinguish metallic from nonmetallic foreign bodies and to identify the specific composition of many nonmetallic foreign bodies. It does not, however, allow identification of the specific composition of a metallic foreign body. We recommend this type of software program for CT scanning of any patient with a facial injury in which a foreign body is suspected.
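The material ranges reported in the Results section translate directly into a simple lookup. This sketch uses only the Hounsfield-unit ranges quoted in the abstract; it is an illustration of the classification logic, not clinical software:

```python
def classify_foreign_body(hu):
    """Rough material classification of a facial foreign body from its CT
    attenuation in Hounsfield units, using the ranges reported above:
    wood -491..-437, plastic -62..-35, glass +105..+2039, metal above that
    (metallic bodies peaked at +3071 HU)."""
    if -491 <= hu <= -437:
        return "wood"
    if -62 <= hu <= -35:
        return "plastic"
    if 105 <= hu <= 2039:
        return "glass"
    if hu > 2039:
        return "metal"
    return "indeterminate"
```

Note the gaps between ranges: a measured value falling outside every reported interval is deliberately returned as indeterminate rather than forced into the nearest class, mirroring the abstract's caveat that metallic composition cannot be identified specifically.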