• Title/Summary/Keyword: Tracking of Moving Object


Reliable Time Synchronization Protocol in Sensor Networks (센서 네트워크에서 신뢰성 있는 시각 동기 프로토콜)

  • Hwang So-Young;Jung Yeon-Su;Baek Yun-Ju
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.31 no.3A
    • /
    • pp.274-281
    • /
    • 2006
  • Sensor network applications such as object tracking, consistent state updates, duplicate detection, and temporal-order delivery require tightly synchronized time. This paper describes a reliable time synchronization protocol (RTSP) for wireless sensor networks. In the proposed method, synchronization error is decreased by creating a hierarchical tree with lower depth, and reliability is improved by maintaining and updating information about candidate parent nodes. The RTSP reduces recovery time and communication overhead compared to TPSN when the topology changes owing to node movement, energy depletion, or physical crashes. Simulation results show that the RTSP achieves about 20% better synchronization accuracy than TPSN, and the number of messages in the RTSP is 20% to 60% lower than in TPSN when nodes fail in the network. When nodes have different transmission ranges, the communication overhead of the RTSP is up to 40% lower than that of TPSN. An illustrative sketch of the tree-building idea follows this entry.
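
As a rough illustration of the low-depth tree construction with candidate parents described in the abstract above, the following Python fragment builds a minimum-depth tree by BFS and falls back to a candidate parent on failure. This is a minimal sketch under our own simplified network model; the function names and the neighbor-dictionary input are illustrative, not the paper's RTSP implementation.

```python
# A minimal sketch, assuming a neighbor-list model of the network; the function
# and variable names are ours, not the paper's.
from collections import deque

def build_sync_tree(neighbors, root):
    """BFS from the reference node gives a minimum-depth tree; every node also
    keeps neighbors closer to the root as candidate parents."""
    depth, parent = {root: 0}, {root: None}
    queue = deque([root])
    while queue:
        u = queue.popleft()
        for v in neighbors[u]:
            if v not in depth:
                depth[v], parent[v] = depth[u] + 1, u
                queue.append(v)
    candidates = {v: [u for u in neighbors[v]
                      if u != parent[v] and depth.get(u, float("inf")) < depth[v]]
                  for v in depth}
    return parent, candidates, depth

def recover(node, failed_parent, parent, candidates):
    # On parent failure, promote a candidate parent instead of rebuilding the tree.
    if parent[node] == failed_parent and candidates[node]:
        parent[node] = candidates[node][0]
    return parent[node]

# Example: node 4 loses its current parent and falls back to a candidate parent.
nbrs = {0: [1, 2], 1: [0, 2, 3, 4], 2: [0, 1, 4], 3: [1], 4: [1, 2]}
parent, cand, depth = build_sync_tree(nbrs, root=0)
print(recover(4, failed_parent=parent[4], parent=parent, candidates=cand))
```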

Design of a Location Management System in the Ubiquitous Computing Environments (유비쿼터스 컴퓨팅 환경에서의 위치 데이타 관리 시스템의 설계)

  • Lee, Ki-Young;Kim, Dong-Oh
    • Journal of the Korea Society of Computer and Information
    • /
    • v.12 no.6
    • /
    • pp.115-121
    • /
    • 2007
  • Recently, Location Based Services (LBS), including tracking and way-finding services, have spread rapidly in the ubiquitous computing environment. As this environment develops, various types of sensors that acquire data, including the locations of moving objects, are widely deployed, and the acquired sensor data become abundant. However, existing location management systems based on a single location sensor have difficulty supporting LBS efficiently in the ubiquitous computing environment. In this paper, we therefore propose a location management system for the ubiquitous computing environment that can manage location data and various other sensor data efficiently. The system adopts core technology for efficiently storing and transferring large volumes of data such as location data and for efficiently processing requests from a variety of servers and sensors. In particular, the architecture presented in this paper can support context-aware and autonomous services efficiently in the ubiquitous computing environment. A hypothetical sketch of such a manager follows this entry.
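
The normalization idea behind such a location management system might look roughly like the following. This is a hypothetical sketch only: the classes, sensor adapters, and in-memory store are our own stand-ins, not the architecture described in the paper.

```python
# Hypothetical sketch: readings from heterogeneous sensors are normalized into
# one location record format before storage, so tracking and way-finding
# services query a single store.
from dataclasses import dataclass
import time

@dataclass
class LocationRecord:
    object_id: str
    x: float
    y: float
    source: str        # e.g. "GPS", "RFID", "camera"
    timestamp: float

class LocationManager:
    def __init__(self):
        self.store = []                       # stands in for a spatiotemporal database

    def ingest(self, object_id, reading, source):
        # each sensor type is converted by its own adapter; only two are shown here
        if source == "GPS":
            x, y = reading["lon"], reading["lat"]
        elif source == "RFID":
            x, y = reading["reader_x"], reading["reader_y"]
        else:
            raise ValueError(f"no adapter for sensor type {source!r}")
        self.store.append(LocationRecord(object_id, x, y, source, time.time()))

    def trajectory(self, object_id):
        # simple query used by tracking and way-finding services
        return [r for r in self.store if r.object_id == object_id]

mgr = LocationManager()
mgr.ingest("car-7", {"lon": 127.0, "lat": 37.5}, "GPS")
mgr.ingest("car-7", {"reader_x": 12.0, "reader_y": 4.5}, "RFID")
print(mgr.trajectory("car-7"))
```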


A Study on The Tracking and Analysis of Moving Object in MPEG Compressed domain (MPEG 압축 영역에서의 움직이는 객체 추적 및 해석)

  • 문수정;이준환;박동선
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2001.11b
    • /
    • pp.103-106
    • /
    • 2001
  • In this paper, we estimate camera motion using information obtained directly from an MPEG-2 video stream and, on that basis, estimate moving objects. Because MPEG-2 motion vectors are not predicted in a regular order, for compression efficiency, we first convert them into motion vectors with optical-flow properties using the attributes of the predicted frames. Using these vectors, we define the basic camera operations of pan, tilt, and zoom. To do so, we define $\Delta x$, $\Delta y$, and $\alpha$, which correspond to the parameters of a pan-tilt-zoom camera model, and obtain them by applying a Hough transform to the motion vector components. Because camera operations occur continuously in time, the camera motion obtained for each frame is also corrected using this property. Finally, to track a moving object, the user first specifies the desired object as a bounding box; the object is then tracked and analyzed by accumulating its camera-motion-compensated motion vectors over one GOP (Group of Pictures) according to their area contribution, and the object region is reset using DCT texture information. Since the information available from compressed MPEG-2 video is at best block-level, object definition is also limited to objects of block size or larger. Because the proposed method obtains information directly from the video stream, it improves computation speed and, by exploiting camera motion characteristics and moving-object tracking, can also be applied to existing content-based retrieval and analysis. These techniques are expected to be useful for the retrieval and analysis of compressed data, particularly in retrieval tools, video editing tools, traffic monitoring systems, and unmanned surveillance systems that require storage and fast analysis of compressed video. An illustrative sketch of the camera-parameter voting step follows this entry.
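
The camera-parameter estimation step can be pictured as follows, under an assumed linear motion model mv ≈ α·p + (Δx, Δy); the coarse voting below merely stands in for the paper's Hough transform, and all names are illustrative.

```python
# Sketch under an assumed linear model mv ≈ alpha * p + (dx, dy); the coarse
# voting stands in for the paper's Hough transform and is not its code.
import numpy as np

def estimate_camera_motion(positions, vectors, alphas=np.linspace(-0.05, 0.05, 21)):
    """positions: (N, 2) macroblock centers relative to the frame center,
    vectors: (N, 2) motion vectors. Returns the (dx, dy, alpha) bin with most votes."""
    votes = {}
    for alpha in alphas:
        implied = vectors - alpha * positions          # each block implies a (dx, dy)
        for dx, dy in np.round(implied).astype(int):
            key = (int(dx), int(dy), round(float(alpha), 3))
            votes[key] = votes.get(key, 0) + 1
    return max(votes, key=votes.get)

def object_blocks(positions, vectors, params, thresh=2.0):
    # Blocks whose vectors disagree with the global (camera) motion are object candidates.
    dx, dy, alpha = params
    residual = vectors - (alpha * positions + np.array([dx, dy]))
    return np.linalg.norm(residual, axis=1) > thresh

# Toy usage: a zooming camera (alpha = 0.02) plus one independently moving block.
pos = np.array([[-160.0, -120.0], [160.0, -120.0], [-160.0, 120.0], [160.0, 120.0], [10.0, 10.0]])
vec = 0.02 * pos + np.array([3.0, 1.0])
vec[-1] += np.array([8.0, -5.0])                       # the "object" block
params = estimate_camera_motion(pos, vec)
print(params, object_blocks(pos, vec, params))         # the last block is flagged as object motion
```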


Development of Augmented Reality Character System based on Markerless Tracking (마커리스 트래킹 기반 증강현실 캐릭터 시스템 개발)

  • Hyun, Sim
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.17 no.6
    • /
    • pp.1275-1282
    • /
    • 2022
  • In this study, we develop real-time character navigation using the AR lens developed by Nreal. Real-time character navigation is not possible with general marker-based AR, because NPC characters must provide guidance while moving through an unspecified space. To address this, a markerless AR system was developed using digital twin technology. Existing markerless AR operates on hardware such as GPS, gyroscopes, and magnetic sensors, so location accuracy is low and system processing time is long, resulting in low reliability in real-time AR environments. To solve this problem, we use the SLAM technique to reconstruct the space as a 3D object and build markerless AR based on point locations, so AR can be implemented without additional hardware intervention in a real-time AR environment. This real-time AR configuration made it possible to implement a navigation system using characters at tourist attractions such as Suncheon Bay Garden and the Suncheon Drama Filming Site. A toy sketch of the positioning step follows this entry.
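
Only the point-based positioning idea is sketched below, in heavily simplified form: real markerless AR pipelines match image features and run SLAM/PnP back ends, whereas this toy assumes 3D point correspondences with the digital-twin map are already known.

```python
# Toy sketch of the positioning step only: correspondences between observed
# points and the prebuilt digital-twin point map are assumed known here.
import numpy as np

def estimate_pose(map_points, observed_points):
    """Rigid transform (R, t) aligning observed points to the prebuilt map
    (Kabsch algorithm); both inputs are (N, 3) arrays with matched rows."""
    mu_m, mu_o = map_points.mean(axis=0), observed_points.mean(axis=0)
    H = (observed_points - mu_o).T @ (map_points - mu_m)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_m - R @ mu_o
    return R, t    # device pose in map coordinates; AR characters are anchored in this frame

# Toy check: points rotated 90 degrees about z and shifted map back onto the map.
rng = np.random.default_rng(1)
map_pts = rng.random((10, 3))
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
obs = (map_pts - np.array([0.5, 0.2, 0.0])) @ R_true.T   # what the device "sees"
R, t = estimate_pose(map_pts, obs)
print(np.allclose(R @ obs.T + t[:, None], map_pts.T))    # True
```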

AR-Based Character Tracking Navigation System Development (AR기반 캐릭터 트래킹 네비게이션 시스템 개발)

  • Lee, SeokHwan;Lee, JungKeum;Sim, Hyun
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.17 no.2
    • /
    • pp.325-332
    • /
    • 2022
  • In this study, we develop real-time character navigation using the AR lens developed by Nreal. Real-time character navigation is not possible with general marker-based AR, because NPC characters must provide guidance while moving through an unspecified space. To address this, a markerless AR system was developed using digital twin technology. Existing markerless AR operates on hardware such as GPS, gyroscopes, and magnetic sensors, so location accuracy is low and system processing time is long, which results in low reliability in real-time AR environments. To solve this problem, we use the SLAM technique to reconstruct the space as a 3D object and build markerless AR based on point locations, so AR can be implemented without additional hardware intervention in a real-time AR environment. This real-time AR configuration made it possible to implement a navigation system using characters at tourist attractions such as Suncheon Bay Garden and the Suncheon Drama Filming Site.

Non-parametric Background Generation based on MRF Framework (MRF 프레임워크 기반 비모수적 배경 생성)

  • Cho, Sang-Hyun;Kang, Hang-Bong
    • The KIPS Transactions:PartB
    • /
    • v.17B no.6
    • /
    • pp.405-412
    • /
    • 2010
  • Previous background generation techniques performed poorly in complex environments because they used only temporal context. To overcome this problem, we propose a new background generation method that incorporates spatial as well as temporal context of the image, which yields a 'clean' background image containing no moving objects. In the proposed method, each sampled frame of the video sequence is first divided into m×n blocks, and each block is classified as either static or non-static. Blocks classified as non-static are modeled in their temporal and spatial contexts using an MRF framework, which provides a convenient and consistent way of modeling context-dependent entities such as image pixels and correlated features. Experimental results show that the proposed method is more efficient than the traditional one. A simplified sketch of the block-wise idea follows this entry.
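
A much-simplified, block-wise reading of the method might look like this; the data and smoothness terms below are our own stand-ins for the paper's MRF energy, and the ICM-style sweeps are only one possible inference choice.

```python
# Simplified sketch, not the paper's energy function: static blocks keep their
# temporal median, non-static blocks choose a source frame by minimizing a data
# term plus a neighbor-smoothness term (ICM-style sweeps).
import numpy as np

def generate_background(frames, bs=16, var_thresh=200.0, lam=0.5, sweeps=2):
    """frames: (T, H, W) grayscale array; returns an (H, W) background image."""
    T, H, W = frames.shape
    gh, gw = H // bs, W // bs
    median = np.median(frames, axis=0)

    def blk(img, i, j):
        return img[..., i * bs:(i + 1) * bs, j * bs:(j + 1) * bs]

    label = -np.ones((gh, gw), dtype=int)            # -1 = static block (use median)
    for i in range(gh):
        for j in range(gw):
            if blk(frames, i, j).var(axis=0).mean() >= var_thresh:
                label[i, j] = 0                      # non-static: label = chosen source frame
    for _ in range(sweeps):                          # ICM-style sweeps over non-static blocks
        for i in range(gh):
            for j in range(gw):
                if label[i, j] < 0:
                    continue
                costs = []
                for t in range(T):
                    data = np.abs(blk(frames[t], i, j) - blk(median, i, j)).mean()
                    smooth = sum(abs(t - label[ni, nj])
                                 for ni, nj in ((i - 1, j), (i, j - 1), (i + 1, j), (i, j + 1))
                                 if 0 <= ni < gh and 0 <= nj < gw and label[ni, nj] >= 0)
                    costs.append(data + lam * smooth)
                label[i, j] = int(np.argmin(costs))
    bg = median.copy()
    for i in range(gh):
        for j in range(gw):
            if label[i, j] >= 0:
                blk(bg, i, j)[:] = blk(frames[label[i, j]], i, j)
    return bg
```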

Tag Trajectory Generation Scheme for RFID Tag Tracing in Ubiquitous Computing (유비쿼터스 컴퓨팅에서 RFID 태그 추적을 위한 태그 궤적 생성 기법)

  • Kim, Jong-Wan;Oh, Duk-Shin;Kim, Kee-Cheon
    • The KIPS Transactions:PartD
    • /
    • v.16D no.1
    • /
    • pp.1-10
    • /
    • 2009
  • One of the major purposes of an RFID system is to track moving objects using tags attached to the objects. Because a tagged object carries both location and time information, expressed as the location of the reader, the trajectory of the object can be indexed like that of existing spatiotemporal objects. More efficient tracking would be possible if a spatiotemporal trajectory could be formed for a tag, but there has been little research on tag trajectory indexes. A characteristic that distinguishes tags from existing spatiotemporal objects is that a tag creates a separate trajectory in each reader by entering and then leaving it. As a result, there is a trajectory interruption interval between readers, in which the tag cannot be located, and this makes it difficult to track the tag. In addition, point tags that only enter readers and do not leave them create no trajectories and therefore cannot be tracked. To solve this problem, we propose a tag trajectory index called the TR-tree (tag trajectory R-tree in an RFID system) that can track a tag by combining the separate per-reader trajectories into one trajectory. The results show that the TR-tree, which overcomes the trajectory interruption, delivers better performance than the TPIR-tree and the R-tree. An illustrative sketch of the segment stitching follows this entry.
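
The core stitching idea, combining a tag's per-reader trajectory segments (including point reads that never leave a reader) into one trajectory, can be sketched as follows; the class and function names are ours, not the TR-tree API.

```python
# Illustrative sketch: stitching the per-reader segments of one tag into a
# single trajectory, including "point" reads that have no leave time.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ReaderSegment:
    reader_id: str
    x: float
    y: float
    t_enter: float
    t_leave: Optional[float] = None   # None for point tags that never leave

def stitch_trajectory(segments):
    """Order a tag's per-reader segments by entry time and bridge the
    interruption interval between consecutive readers with a connecting leg."""
    segs = sorted(segments, key=lambda s: s.t_enter)
    trajectory = []
    for prev, cur in zip(segs, segs[1:]):
        t_out = prev.t_leave if prev.t_leave is not None else prev.t_enter
        trajectory.append(((prev.x, prev.y, t_out), (cur.x, cur.y, cur.t_enter)))
    return trajectory   # list of (start, end) legs usable as R-tree entries

segs = [ReaderSegment("R1", 0.0, 0.0, 10.0, 12.0),
        ReaderSegment("R2", 5.0, 0.0, 20.0, 23.0),
        ReaderSegment("R3", 5.0, 4.0, 30.0)]          # point read: enter only
print(stitch_trajectory(segs))
```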

Real-Time Human Tracker Based Location and Motion Recognition for the Ubiquitous Smart Home (유비쿼터스 스마트 홈을 위한 위치와 모션인식 기반의 실시간 휴먼 트랙커)

  • Park, Se-Young;Shin, Dong-Kyoo;Shin, Dong-Il;Cuong, Nguyen Quoe
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2008.06d
    • /
    • pp.444-448
    • /
    • 2008
  • The ubiquitous smart home is the home of the future, which takes advantage of context information from the human and the home environment and provides automatic home services for the human. Human location and motion are the most important contexts in the ubiquitous smart home. We present a real-time human tracker that predicts human location and motion for the ubiquitous smart home, using four network cameras for real-time tracking. This paper explains the real-time human tracker's architecture and presents an algorithm detailing its two functions, prediction of human location and prediction of human motion. Human location uses three kinds of background images (IMAGE1: empty room; IMAGE2: room with furniture and home appliances; IMAGE3: IMAGE2 plus the human). The tracker determines which piece of furniture (or home appliance) the human is at through an analysis of the three images, and predicts human motion using a support vector machine. Locating the human using the three images took an average of 0.037 seconds. The SVM feature for motion recognition is derived from the number of pixels in each row (array line) of the moving object. We evaluated each motion 1000 times; the average accuracy over all motions was 86.5%. A rough sketch of the three-image analysis follows this entry.
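
The three-image analysis can be pictured as below. This is a rough sketch with assumed thresholds and synthetic images; scipy's connected-component labeling stands in for however the original system delimits furniture regions.

```python
# Rough sketch of the three-image idea: furniture regions come from IMAGE2 - IMAGE1,
# the human blob from IMAGE3 - IMAGE2, and the location is the furniture region
# that the blob overlaps most. Thresholds and images here are assumptions.
import numpy as np
from scipy import ndimage

def locate_human(img_empty, img_furniture, img_current, thresh=30):
    """IMAGE1 = empty room, IMAGE2 = room with furniture, IMAGE3 = IMAGE2 + human."""
    furniture = np.abs(img_furniture.astype(int) - img_empty.astype(int)) > thresh
    human = np.abs(img_current.astype(int) - img_furniture.astype(int)) > thresh
    regions, n = ndimage.label(furniture)                 # one label per furniture region
    overlaps = [(np.logical_and(human, regions == k).sum(), k) for k in range(1, n + 1)]
    count, region = max(overlaps, default=(0, None))
    return region if count > 0 else None                  # furniture region the person is at

# Tiny synthetic example: a 'sofa' patch and a human blob partly overlapping it.
empty = np.zeros((60, 80), dtype=np.uint8)
furnished = empty.copy()
furnished[40:55, 10:40] = 200                             # sofa
current = furnished.copy()
current[20:50, 25:35] = 90                                # person partly on the sofa
print(locate_human(empty, furnished, current))            # -> 1 (the sofa region)
```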


Real-Time Human Tracker Based on Location and Motion Recognition of User for Smart Home (스마트 홈을 위한 사용자 위치와 모션 인식 기반의 실시간 휴먼 트랙커)

  • Choi, Jong-Hwa;Park, Se-Young;Shin, Dong-Kyoo;Shin, Dong-Il
    • The KIPS Transactions:PartA
    • /
    • v.16A no.3
    • /
    • pp.209-216
    • /
    • 2009
  • The ubiquitous smart home is the home of the future, which takes advantage of context information from the human and the home environment and provides automatic home services for the human. Human location and motion are the most important contexts in the ubiquitous smart home. We present a real-time human tracker that predicts human location and motion for the ubiquitous smart home, using four network cameras for real-time tracking. This paper explains the real-time human tracker's architecture and presents an algorithm detailing its two functions, prediction of human location and prediction of human motion. Human location uses three kinds of background images (IMAGE1: empty room; IMAGE2: room with furniture and home appliances; IMAGE3: IMAGE2 plus the human). The tracker determines which piece of furniture (or home appliance) the human is at through an analysis of the three images, and predicts human motion using a support vector machine. Locating the human using the three images took an average of 0.037 seconds. The SVM feature for motion recognition is derived from the number of pixels in each row (array line) of the moving object. We evaluated each motion 1000 times; the average accuracy over all motions was 86.5%. A hedged sketch of the SVM feature follows this entry.
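
The stated SVM feature, the number of foreground pixels in each row of the moving object, can be sketched as follows; scikit-learn's SVC and the random placeholder masks are our assumptions, not the authors' implementation.

```python
# Hedged sketch of the row-count feature fed to an SVM; the random masks are
# placeholders standing in for real segmented frames.
import numpy as np
from sklearn.svm import SVC

def row_count_feature(mask):
    # mask: boolean foreground mask of the person; feature = foreground pixels per row
    return mask.sum(axis=1).astype(float)

def train_motion_classifier(masks, labels):
    """masks: list of equally sized boolean arrays, labels: motion class per mask."""
    X = np.vstack([row_count_feature(m) for m in masks])
    return SVC(kernel="rbf").fit(X, np.asarray(labels))

# Toy usage with random masks standing in for real segmented frames.
rng = np.random.default_rng(0)
masks = [rng.random((120, 160)) > 0.7 for _ in range(20)]
labels = [i % 4 for i in range(20)]                      # four motion classes in this demo
clf = train_motion_classifier(masks, labels)
print(clf.predict(row_count_feature(masks[0]).reshape(1, -1)))
```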

Differences in Eye Movement during the Observing of Spiders by University Students' Cognitive Style - Heat map and Gaze plot analysis - (대학생의 인지양식에 따라 거미 관찰에서 나타나는 안구 운동의 차이 - Heat map과 Gaze plot 분석을 중심으로 -)

  • Yang, Il-Ho;Choi, Hyun-Dong;Jeong, Mi-Yeon;Lim, Sung-Man
    • Journal of Science Education
    • /
    • v.37 no.1
    • /
    • pp.142-156
    • /
    • 2013
  • The purpose of this study was to analyze observation characteristics through eye movement according to cognitive style. For this, an observation task that could reveal differences between a wholistic cognitive style group and an analytic cognitive style group was developed, and the eye movements of university students with different cognitive styles were measured while they performed the task. Differences between the two groups were confirmed by analyzing the collected statistics and visualization data. The findings were as follows. First, in observation sequence and pattern, the analytic group attended to the spider first and then moved on to the surrounding environment, whereas the wholistic group had no fixed pattern, observing the spider itself and its surroundings alternately or looking closely at a particular part first. When observing whole and partial features, the wholistic group moved its fixation from one salient element to another without a fixed pattern, while the analytic group showed a clear directionality and repeated inspection. Second, in the proportion of observation, the analytic group devoted most of its attention to the spider itself, while the wholistic group gave more weight to the area surrounding the spider; the analytic group showed higher concentration when observing partial features, and the wholistic group when observing whole features. The wholistic group attached more importance to partial features of the surroundings and to the spider's overall features than the analytic group did, while the analytic group focused more on the spider's partial features. These results show that observation time, frequency, object, area, sequence, pattern, and proportion differ by cognitive style. They suggest that students produce different outcomes because they attend to different information depending on their cognitive style, and they can help determine which kind of observation activity is suitable for each student.
