• Title/Summary/Keyword: Mobile Object Tracking


Object Tracking Algorithm for Intelligent Robot using Sound Source Tracking Sensor Network (음원 센서네트워크를 이용한 지능형 로봇의 목표물 추적 알고리즘)

  • Jang, In-Hun;Park, Kyoung-Jin;Yang, Hyun-Chang;Lee, Jong-Chang;Sim, Kwee-Bo
    • Journal of Institute of Control, Robotics and Systems / v.13 no.10 / pp.983-989 / 2007
  • Most living things, including humans, react to environmental changes such as light, sound, and smell with their own inherent patterns. A sense of direction, in particular, is often a key factor in such reactions: humans and animals that react instantly to a stimulus determine their action based on the direction of that stimulus. In this paper, we propose a method to give a robot a sense of direction using sound, a representative stimulus, and tracking sensors that can detect the direction of the sound source. We also propose a method to determine the relative directions among devices or robots using a digital compass and the RSSI on a wireless network.
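The core of giving a robot a "sense of direction" from sound is estimating the angle of arrival. The paper does not give its estimator, but a minimal sketch, assuming a two-microphone array and a known time-difference-of-arrival, looks like this (all names and constants are illustrative):

```python
import math

# Speed of sound in air (m/s); assumed constant for this sketch.
SPEED_OF_SOUND = 343.0

def doa_from_tdoa(delay_s, mic_spacing_m):
    """Estimate the angle of arrival (degrees) of a sound source
    from the time-difference-of-arrival between two microphones.

    A zero delay means the source is broadside to the array;
    the maximum delay corresponds to a source along the array axis.
    """
    # Path-length difference implied by the measured delay.
    path_diff = SPEED_OF_SOUND * delay_s
    # Clamp to the physically valid range before taking arcsin.
    ratio = max(-1.0, min(1.0, path_diff / mic_spacing_m))
    return math.degrees(math.asin(ratio))
```

For example, with 20 cm microphone spacing, zero delay yields 0°, and a delay equal to the full acoustic travel time across the array yields 90°.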

Development of Cultural Contents using Augmented Reality Based Markerless Tracking

  • Kang, Hanbyeol;Park, DaeWon;Lee, SangHyun
    • International journal of advanced smart convergence / v.5 no.4 / pp.57-65 / 2016
  • This paper aims to improve the quality of cultural experiences by providing a three-dimensional guide service that lets users explore cultural heritage on their own, without additional guides or commentators, using the latest mobile IT technology. We propose a method of constructing cultural contents based on location information for the user and the cultural heritage using markerless-tracking-based augmented reality and GPS. We use marker detection and markerless tracking to recognize augmented reality objects accurately according to the state of the cultural heritage, and use Android's Google Maps to locate the user. The goal of this paper is to produce content for introducing cultural heritage using GPS and augmented reality on Android, which can be combined with various objects beyond the limitations of existing augmented reality contents.
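A location-based guide of this kind needs the distance between the user's GPS fix and a heritage site to decide which content to trigger. The paper does not specify its computation; a standard haversine great-circle distance is a reasonable sketch (function name and radius constant are my own):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes,
    e.g. the user's position and a cultural-heritage site."""
    R = 6371000.0  # mean Earth radius in metres
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(a))
```

Content for a site could then be activated whenever this distance falls below a chosen radius, e.g. 50 m.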

Development of the FishBowl Game Employing a Tabletop Tiled Display Coupling With Mobile Interfaces (모바일 인터페이스와 테이블탑 타일드 디스플레이를 연동한 FishBowl 게임 개발)

  • Kong, Young-Sik;Park, Kyoung-Shin
    • Journal of Korea Game Society / v.10 no.2 / pp.57-66 / 2010
  • In prior works on tabletop systems, a projection-based tabletop surface is mostly used to display computer images, and participants interact with the display surface by multi-touch or with tangible objects. In this research, however, we developed the FishBowl game, which employs a scalable tabletop tiled display with infrared camera tracking coupled with PDA mobile interfaces. The focus of this game is to enhance user interactivity and realism by coupling the high-resolution tabletop virtual environment with the PDA mobile interface. This paper describes the game design, followed by the system design and its detailed implementation. It also discusses system usability and recommendations for improvement gathered from interviews with game players, and concludes with future research directions.

Vision-based Motion Control for the Immersive Interaction with a Mobile Augmented Reality Object (모바일 증강현실 물체와 몰입형 상호작용을 위한 비전기반 동작제어)

  • Chun, Jun-Chul
    • Journal of Internet Computing and Services / v.12 no.3 / pp.119-129 / 2011
  • Vision-based human-computer interaction is an emerging field of science and industry that provides a natural way for humans and computers to communicate. In particular, the growing demand for mobile augmented reality requires efficient interaction technologies between augmented virtual objects and users. This paper presents a novel approach to constructing and controlling a markerless mobile augmented reality object. Replacing the traditional marker, the human hand serves as the interface for the markerless system. To implement the markerless mobile augmented system within the limited resources of a mobile device, compared with desktop environments, we propose a method that extracts an optimal hand region, which plays the role of the marker, and augments the object in real time using the camera attached to the mobile device. Optimal hand-region detection consists of detecting the hand region with a YCbCr skin-color model and extracting the optimal rectangular region with the Rotating Calipers algorithm; the extracted rectangle takes the role of the traditional marker. The proposed method resolves the problem of losing track of the fingertips when the hand is rotated or occluded in hand-marker systems. The experiments show that the proposed framework can effectively construct and control the augmented virtual object in mobile environments.
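The first stage of the pipeline above, YCbCr skin-color detection, can be sketched per pixel as follows. The BT.601 conversion is standard, but the CbCr skin thresholds here are a common choice from the literature, not values taken from this paper:

```python
def rgb_to_ycbcr(r, g, b):
    """ITU-R BT.601 RGB -> YCbCr conversion for 8-bit values."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def is_skin(r, g, b, cb_range=(77, 127), cr_range=(133, 173)):
    """Classify a pixel as skin using a box threshold in the CbCr
    plane (luma is ignored, making the test lighting-tolerant)."""
    _, cb, cr = rgb_to_ycbcr(r, g, b)
    return cb_range[0] <= cb <= cb_range[1] and cr_range[0] <= cr <= cr_range[1]
```

The binary skin mask produced this way would then feed the rotating-calipers step, which finds the minimum-area enclosing rectangle of the hand region.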

Realtime Human Object Segmentation Using Image and Skeleton Characteristics (영상 특성과 스켈레톤 분석을 이용한 실시간 인간 객체 추출)

  • Kim, Minjoon;Lee, Zucheul;Kim, Wonha
    • Journal of Broadcast Engineering / v.21 no.5 / pp.782-791 / 2016
  • Segmenting objects from the background can be used for object recognition, tracking, and many other applications. This paper proposes a segmentation method that refers to several initial frames, processed in real time with a fixed camera. First, we suggest a probability model to separate object and background, and we enhance the algorithm by analyzing the color consistency and camera focus characteristics over the initial frames. We then refine the segmentation result by using human skeleton characteristics among the extracted objects. Finally, the proposed method is applicable to various mobile applications because it minimizes computational complexity for real-time video processing.
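The idea of building a per-pixel probability model from a fixed camera's initial frames can be sketched with a running-Gaussian background model. This is a generic stand-in, the paper's exact probability model is not specified here, and the learning rate and thresholds are illustrative:

```python
class PixelBackgroundModel:
    """Running-Gaussian background model for a single pixel,
    learned from the first frames of a fixed camera."""

    def __init__(self, alpha=0.05):
        self.alpha = alpha      # learning rate for mean/variance updates
        self.mean = None
        self.var = 15.0 ** 2    # initial variance assumption

    def update(self, value):
        """Fold one observed intensity into the background model."""
        if self.mean is None:
            self.mean = float(value)
            return
        d = value - self.mean
        self.mean += self.alpha * d
        self.var += self.alpha * (d * d - self.var)

    def is_foreground(self, value, k=2.5):
        """A pixel more than k standard deviations from the learned
        background mean is treated as (human) object."""
        if self.mean is None:
            return False
        return abs(value - self.mean) > k * (self.var ** 0.5)
```

In a full system, one such model per pixel yields a foreground mask that the skeleton-based refinement stage would then clean up.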

Embedded Marker System for Smart Object Recognition and Tracking in Mobile Augmented Reality (모바일 증강현실에서 스마트 오브젝트 인식 및 트래킹을 위한 임베디드 마커 시스템)

  • Kim, Hye-Jin;Woo, Woon-Tack
    • Proceedings of HCI Korea (한국HCI학회 학술대회논문집) / 2007.02a / pp.131-136 / 2007
  • This paper proposes an embedded marker system for smart object recognition and tracking in mobile augmented reality. The markers mainly used in previous augmented reality research contain arbitrary patterns and are separated from the target object, which makes them an unnatural visual obstacle. Moreover, using a particular marker requires a training step, after which the result must be registered in the recognition module one by one. To solve these problems, the proposed embedded marker combines the object and the marker, taking into account the characteristics of smart objects, which are classified as fixed or variable depending on whether they have a display device. An integrated training and recognition module also makes it easy to add objects and extend the system. The proposed system was applied to ubiHome, a smart home testbed, and its effectiveness was analyzed through a usability evaluation. With such embedded markers, users can predict the purpose of a marker more intuitively and experience natural augmented reality by keeping their gaze aligned with the target object.


Odor Source Tracking of Mobile Robot with Vision and Odor Sensors (비전과 후각 센서를 이용한 이동로봇의 냄새 발생지 추적)

  • Ji, Dong-Min;Lee, Jeong-Jun;Kang, Geun-Taek;Lee, Won-Chang
    • Journal of the Korean Institute of Intelligent Systems / v.16 no.6 / pp.698-703 / 2006
  • This paper proposes an approach to searching for an odor source using an autonomous mobile robot equipped with vision and odor sensors. The robot initially navigates around a specific area with its vision system until it finds an object in the camera image. The robot approaches the object found in its field of view and checks with the odor sensors whether it is releasing odor. If so, the odor is classified and localized with a classification algorithm based on a neural network. The AMOR (Autonomous Mobile Olfactory Robot) was built and used for the experiments. Experimental results on the classification and localization of odor sources show the validity of the proposed algorithm.
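The odor-classification step maps a vector of gas-sensor readings to an odor class. The paper uses a neural network whose details are not given here; as a minimal stand-in, a nearest-prototype classifier over sensor-space illustrates the same input/output contract (prototype names and values are invented for the example):

```python
def classify_odor(reading, prototypes):
    """Nearest-prototype classification of an odor-sensor reading.

    `reading` is a list of sensor values; `prototypes` maps each
    odor-class name to a reference reading of the same length.
    Returns the name of the closest prototype in Euclidean distance.
    """
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(prototypes, key=lambda name: dist2(reading, prototypes[name]))
```

A trained neural network would replace this distance rule, but the surrounding search loop (navigate, approach, sniff, classify) stays the same.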

Multi Modal Sensor Training Dataset for the Robust Object Detection and Tracking in Outdoor Surveillance (MMO (Multi Modal Outdoor) Dataset) (실외 경비 환경에서 강인한 객체 검출 및 추적을 위한 실외 멀티 모달 센서 기반 학습용 데이터베이스 구축)

  • Noh, DongKi;Yang, Wonkeun;Uhm, Teayoung;Lee, Jaekwang;Kim, Hyoung-Rock;Baek, SeungMin
    • Journal of Korea Multimedia Society / v.23 no.8 / pp.1006-1018 / 2020
  • Datasets are becoming ever more important for developing learning-based algorithms, whose quality depends heavily on the dataset. We therefore introduce a new dataset of over 200 thousand fully labeled multi-modal sensor images. The proposed dataset was designed and constructed for researchers who want to develop detection, tracking, and action classification in outdoor environments for surveillance scenarios. It includes various images and multi-modal sensor data under different weather and lighting conditions. We hope it will help in developing more robust algorithms for systems equipped with different kinds of sensors in outdoor applications. Case studies with the proposed dataset are also discussed in this paper.

A Study on ISpace with Distributed Intelligent Network Devices for Multi-object Recognition (다중이동물체 인식을 위한 분산형 지능형네트워크 디바이스로 구현된 공간지능화)

  • Jin, Tae-Seok;Kim, Hyun-Deok
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2007.10a / pp.950-953 / 2007
  • The Intelligent Space (ISpace) provides challenging research fields for surveillance, human-computer interfacing, networked camera conferencing, industrial monitoring, and service and training applications. ISpace is a space where many intelligent devices, such as computers and sensors, are distributed. For these devices to cooperate, it is very important that the system knows location information about the environment in order to offer useful services. To achieve this goal, we present a method for representing, tracking, and following humans by fusing distributed multiple vision systems in ISpace, with application to pedestrian tracking in a crowd.


Study on the Improved Target Tracking for the Collaborative Control of the UAV-UGV (UAV-UGV의 협업제어를 위한 향상된 Target Tracking에 관한 연구)

  • Choi, Jae-Young;Kim, Sung-Gaun
    • Journal of Institute of Control, Robotics and Systems / v.19 no.5 / pp.450-456 / 2013
  • This paper suggests an improved target-tracking method for the collaboration of a quad-rotor-type UAV (Unmanned Aerial Vehicle) and an omnidirectional UGV (Unmanned Ground Vehicle). If the UAV shakes or the UGV moves rapidly, the existing method loses the tracking target. To solve this problem, we propose an algorithm that can continue tracking after the target is lost. The proposed algorithm stores the vector of the landmark, and if the target is lost, a control signal is input so that the camera keeps moving in the direction in which the landmark disappeared. Prior to the experiment, proportional-integral control was applied to the four motors to calibrate the heading of the omnidirectional mobile robot. The landmark on the UGV was recognized by the camera attached to the UAV, and the target was tracked through proportional-integral-derivative control. Finally, the performance of the target-tracking controller and the proposed algorithm was evaluated through experiments.
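The proportional-integral-derivative loop used to keep the UGV landmark centred in the UAV camera image follows the standard PID form. The gains and time step below are illustrative, not values from the paper:

```python
class PID:
    """Minimal discrete PID controller: the error could be, e.g., the
    pixel offset of the tracked landmark from the image centre."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def step(self, error):
        """Return the control output for the current error sample."""
        self.integral += error * self.dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv
```

One such controller per axis (pan and tilt, or x and y motor commands) would drive the landmark offset toward zero each frame.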