• Title/Summary/Keyword: Image tracking system

Study on Analyzing and Correction of Dynamic Battery Alignment Error in Naval Gun Fire Control System by using Image of Boresight Telescope (포배열카메라 영상을 활용한 함포 사격통제시스템의 동적배열오차 분석 및 보정방법)

  • Kim, Eui-Jin;Suh, Tae Il
    • Journal of the Korea Institute of Military Science and Technology, v.16 no.6, pp.745-751, 2013
  • In naval gun firing, firing accuracy results from the combined accuracy of each component of the CFCS (Command and Fire Control System), such as the tracking sensors and the gun. Generally, battery alignment is performed to correct the error between the gun and the tracking sensor by using a boresight telescope, both in harbor and at sea. However, battery alignment can compensate only for the static alignment error and ignores the dynamic alignment error caused by own-ship motion. Until now there has been no research on this dynamic alignment error. We propose a new way to analyze the dynamic alignment error by using the image of the boresight telescope, and, for the case where the dynamic alignment error is caused by a time delay in own-ship attitude information, a way to compensate for it (see the sketch below).
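As a rough illustration of the delay-compensation idea described in the abstract, the sketch below extrapolates delayed own-ship attitude samples forward across an assumed latency. The sampling rate, the delay value, and the function name are illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch: compensating a known time delay in own-ship attitude data
# by linear extrapolation of the latest attitude rate. The history format
# and delay value are illustrative assumptions, not the paper's method.
import numpy as np

def extrapolate_attitude(times, angles_deg, delay_s):
    """Estimate the current attitude from delayed samples.

    times      : 1-D array of sample timestamps [s], oldest first
    angles_deg : array of shape (N, 3) with roll, pitch, yaw [deg]
    delay_s    : assumed latency between measurement and use [s]
    """
    # finite-difference rate from the last two samples
    rate = (angles_deg[-1] - angles_deg[-2]) / (times[-1] - times[-2])
    # project the latest sample forward across the delay
    return angles_deg[-1] + rate * delay_s

if __name__ == "__main__":
    t = np.array([0.00, 0.05, 0.10])                   # 20 Hz attitude feed
    att = np.array([[1.0, 0.20, 30.0],
                    [1.2, 0.25, 30.5],
                    [1.4, 0.30, 31.0]])                # roll, pitch, yaw [deg]
    print(extrapolate_attitude(t, att, delay_s=0.08))
```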

Customer Activity Recognition System using Image Processing

  • Waqas, Maria;Nasir, Mauizah;Samdani, Adeel Hussain;Naz, Habiba;Tanveer, Maheen
    • International Journal of Computer Science & Network Security, v.21 no.9, pp.63-66, 2021
  • Technological advancement in computer vision has made systems like grab-and-go grocery stores a reality. All shoppers have to do now is walk in, grab their items, and go out without having to wait in long queues. This paper presents an intelligent retail environment system that is capable of monitoring and tracking customers' activity during shopping based on their interaction with the shelf. It aims to develop a system that is low cost, easy to mount, and exhibits adequate performance in a real environment.

A Study on the Tracking Algorithm for BSD Detection of Smart Vehicles (스마트 자동차의 BSD 검지를 위한 추적알고리즘에 관한 연구)

  • Kim Wantae
    • Journal of Korea Society of Digital Industry and Information Management, v.19 no.2, pp.47-55, 2023
  • Recently, sensor technologies have emerged to prevent traffic accidents and support safe driving in complex environments where human perception may be limited. The UWS is a technology that uses an ultrasonic sensor to detect objects at short distances; while it has the advantage of being simple to use, it also has the disadvantage of a limited detection range. The LDWS, on the other hand, uses front-image processing to detect lane departure and ensure the safety of the driving path, but it may not be sufficient for assessing the driving environment around the vehicle. To overcome these limitations, systems based on FMCW radar are used. A BSD radar system using FMCW continuously emits signals while driving, and the emitted signals bounce off nearby objects and return to the radar. The key technologies involved in designing the BSD radar system are the tracking algorithms for detecting the situation around the vehicle. This paper presents a tracking algorithm for designing a BSD radar system, while explaining the principles of FMCW radar technology and its signal types. Additionally, this paper presents the target tracking procedure and target filter needed to design an accurate tracking system, and its performance is verified through simulation (see the filter sketch below).
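The abstract mentions a target tracking procedure and a target filter but does not specify the filter design; the sketch below shows a generic alpha-beta tracking filter of the kind commonly used for smoothing FMCW radar range measurements, with illustrative gains and a 1-D range-only state.

```python
# Hedged sketch: an alpha-beta tracking filter as one possible "target filter"
# for an FMCW radar tracker. The gains and range-only state are illustrative
# choices, not the specific design of the cited paper.
import numpy as np

def alpha_beta_track(measurements, dt, alpha=0.85, beta=0.3):
    """Track range and range rate from noisy radar range measurements."""
    x, v = measurements[0], 0.0          # initial range and range rate
    estimates = []
    for z in measurements:
        # predict
        x_pred = x + v * dt
        # update with the measurement residual
        r = z - x_pred
        x = x_pred + alpha * r
        v = v + (beta / dt) * r
        estimates.append((x, v))
    return np.array(estimates)

if __name__ == "__main__":
    dt = 0.05                                           # 20 Hz radar update
    true_range = 30.0 - 8.0 * dt * np.arange(60)        # closing target
    noisy = true_range + np.random.normal(0, 0.3, 60)   # measurement noise
    print(alpha_beta_track(noisy, dt)[-1])              # final range, range rate
```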

A Fast Vision-based Head Tracking Method for Interactive Stereoscopic Viewing

  • Putpuek, Narongsak;Chotikakamthorn, Nopporn
    • Institute of Control, Robotics and Systems (ICROS): Conference Proceedings, 2004.08a, pp.1102-1105, 2004
  • In this paper, the problem of tracking a viewer's head in a desktop-based interactive stereoscopic display system is considered. A fast and low-cost approach to the problem is important for such a computing environment. The system under consideration uses shutter glasses for stereoscopic display. The proposed method makes use of an image taken from a single low-cost video camera. By using a simple feature extraction algorithm, the points obtained from the image of the user-worn shutter glasses are used to estimate the glasses' center, their local 'yaw' angle, as measured with respect to the glasses' center, and their global 'yaw' angle, as measured with respect to the camera location. The stereoscopic image synthesis program uses these values to interactively adjust the two-view stereoscopic image pair displayed on a computer screen. The adjustment is carried out so that the resulting stereoscopic picture, when viewed from the current user position, provides close-to-real perspective and depth perception. However, because the algorithm and device used are designed for fast computation, the estimation is typically not precise enough to provide flicker-free interactive viewing. An error concealment method is therefore proposed to alleviate the problem. This concealment method should be sufficient for applications that do not require a high degree of visual realism and interaction (see the yaw-estimation sketch below).
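The sketch below illustrates one plausible way to estimate the glasses' center, a local yaw from apparent-width foreshortening, and a global yaw relative to the camera axis from two detected feature points; the feature points, the known glasses width, and the focal length are assumptions for illustration, not the paper's exact formulation.

```python
# Hedged sketch: estimating the glasses' center, a "local" yaw from apparent
# width foreshortening, and a "global" yaw from the offset to the optical axis.
# The two feature points, the expected width, and the focal length are assumed.
import numpy as np

def head_yaw_estimates(p_left, p_right, expected_width_px, focal_px, cx):
    """p_left, p_right: image points (x, y) of the glasses' outer corners."""
    p_left, p_right = np.asarray(p_left, float), np.asarray(p_right, float)
    center = (p_left + p_right) / 2.0
    apparent_width = np.linalg.norm(p_right - p_left)
    # foreshortening: apparent width shrinks with cos(local yaw)
    ratio = np.clip(apparent_width / expected_width_px, -1.0, 1.0)
    local_yaw = np.degrees(np.arccos(ratio))
    # direction of the center relative to the camera's optical axis
    global_yaw = np.degrees(np.arctan2(center[0] - cx, focal_px))
    return center, local_yaw, global_yaw

if __name__ == "__main__":
    print(head_yaw_estimates((210, 240), (390, 248),
                             expected_width_px=200, focal_px=600, cx=320))
```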

Moving Target Tracking using Vision System for an Omni-directional Wheel Robot (전방향 구동 로봇에서의 비젼을 이용한 이동 물체의 추적)

  • Kim, San;Kim, Dong-Hwan
    • Journal of Institute of Control, Robotics and Systems, v.14 no.10, pp.1053-1061, 2008
  • In this paper, moving target tracking using binocular vision for an omni-directional mobile robot is addressed. In the binocular vision system, three-dimensional information on the target is extracted by vision processes including calibration, image correspondence, and 3D reconstruction (see the triangulation sketch below). The robot controller uses SPI (serial peripheral interface) to communicate effectively between the robot master controller and the wheel controllers.
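The 3D reconstruction step mentioned above can be illustrated with a standard rectified-stereo triangulation; the baseline, focal length, and principal point below are placeholder calibration values, not those of the robot in the paper.

```python
# Hedged sketch: the 3-D reconstruction step of a rectified binocular setup,
# recovering a target point from its matched left/right image coordinates.
# Baseline, focal length, and principal point are illustrative values.
import numpy as np

def triangulate(u_left, u_right, v, focal_px, baseline_m, cx, cy):
    """Return (X, Y, Z) in the left-camera frame from a matched point pair."""
    disparity = float(u_left - u_right)       # pixels; assumes u_left > u_right
    if disparity <= 0:
        raise ValueError("non-positive disparity: bad correspondence")
    Z = focal_px * baseline_m / disparity     # depth
    X = (u_left - cx) * Z / focal_px          # lateral offset
    Y = (v - cy) * Z / focal_px               # vertical offset
    return np.array([X, Y, Z])

if __name__ == "__main__":
    # matched target at column 352 (left) / 340 (right), row 251
    print(triangulate(352, 340, 251, focal_px=700, baseline_m=0.12, cx=320, cy=240))
```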

Lane Violation Detection System Using Feature Tracking (특징점 추적을 이용한 끼어들기 위반차량 검지 시스템)

  • Lee, Hee-Sin;Lee, Joon-Whoan
    • The Journal of The Korea Institute of Intelligent Transport Systems, v.8 no.2, pp.36-44, 2009
  • In this paper, we suggest a system for detecting vehicles that violate the lane by cutting in, based on feature point tracking. The whole algorithm of the suggested system consists of three stages: feature extraction, feature registration and tracking for the tracking-targeted vehicle, and detection of the lane-violating vehicle. In the feature extraction stage, features are extracted from the input image using a feature-extraction algorithm suitable for real-time processing. From the extracted features, the tracking-targeted features are then selected and registered. The registered features are tracked using NCC (normalized cross correlation), and lane violation is finally detected using information on the tracked features (see the NCC sketch below). Experiments with images acquired in a section where cutting in is prohibited showed excellent performance, with a positive recognition ratio of 99.09% and an error ratio of 0.9%. A processing speed of 34.48 frames per second was obtained, which is suitable for real-time processing.
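A minimal sketch of the NCC-based feature tracking described above is given below; the patch size, search radius, and synthetic frame are illustrative, and the paper's actual implementation may differ.

```python
# Hedged sketch: tracking a registered feature patch by normalized cross
# correlation (NCC). Patch and search-window sizes are illustrative; a real
# tracker would also exploit motion prediction to limit the search region.
import numpy as np

def ncc(patch, candidate):
    a = patch - patch.mean()
    b = candidate - candidate.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def track_by_ncc(frame, template, prev_xy, search=15):
    """Return the (x, y) in `frame` that best matches `template` near prev_xy."""
    h, w = template.shape
    x0, y0 = prev_xy
    best, best_xy = -1.0, prev_xy
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            x, y = x0 + dx, y0 + dy
            if x < 0 or y < 0 or y + h > frame.shape[0] or x + w > frame.shape[1]:
                continue
            score = ncc(template, frame[y:y + h, x:x + w])
            if score > best:
                best, best_xy = score, (x, y)
    return best_xy, best

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame = rng.random((120, 160))
    template = frame[40:50, 60:70].copy()       # feature registered at (60, 40)
    print(track_by_ncc(frame, template, prev_xy=(58, 42)))
```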

Accuracy of simulation surgery of Le Fort I osteotomy using optoelectronic tracking navigation system (광학추적항법장치를 이용한 르포씨 제1형 골절단 가상 수술의 정확성에 대한 연구)

  • Bu, Yeon-Ji;Kim, Soung-Min;Kim, Ji-Youn;Park, Jung-Min;Myoung, Hoon;Lee, Jong-Ho;Kim, Myung-Jin
    • Journal of the Korean Association of Oral and Maxillofacial Surgeons, v.37 no.2, pp.114-121, 2011
  • Introduction: The aim of this study was to demonstrate that simulation surgery on a rapid prototype (RP) model, based on 3-dimensional computed tomography (3D CT) data taken before surgery, has the same accuracy as traditional orthognathic surgery with an intermediate splint, using an optoelectronic tracking navigation system. Materials and Methods: Simulation surgery with the same treatment plan as the Le Fort I osteotomy performed on the patient was done on an RP model based on the 3D CT data of 12 patients who had undergone a Le Fort I osteotomy in the Department of Oral and Maxillofacial Surgery, Seoul National University Dental Hospital. The 12 distances between 4 points on the skull (both infraorbital foramina and both supraorbital foramina) and 3 points on the maxilla (the contact point of both maxillary central incisors and the mesiobuccal cusp tips of both maxillary first molars) were tracked using an optoelectronic tracking navigation system. The distances before surgery were compared to evaluate the accuracy of the RP model, and the distance changes in the 3D CT image after surgery were compared with those of the RP model after simulation surgery. Results: A paired t-test revealed a significant difference between the distances in the 3D CT image and the RP model before surgery (P<0.0001). On the other hand, Pearson's correlation coefficient, 0.995, revealed a significant positive correlation between the distances (P<0.0001). There was a significant difference between the change in the distances of the 3D CT image and the RP model before and after surgery (P<0.05). The Pearson's correlation coefficient was 0.13844, indicating a positive correlation (P<0.1). Conclusion: These results suggest that simulation surgery of a Le Fort I osteotomy using an optoelectronic tracking navigation system is relatively accurate when comparing the pre- and post-operative 3D CT data. Furthermore, the application of an optoelectronic tracking navigation system may be a predictable and efficient method in Le Fort I orthognathic surgery (a sketch of the statistical comparison follows below).
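The statistical comparison described in the Results can be reproduced in outline with a paired t-test and Pearson correlation, as sketched below; the distance values are made-up placeholders, not data from the study.

```python
# Hedged sketch: the kind of paired t-test and Pearson correlation used to
# compare distances measured on the 3D CT image and on the RP model.
# The numbers below are fabricated placeholders, not data from the study.
import numpy as np
from scipy import stats

ct_mm = np.array([42.1, 55.3, 61.0, 48.7, 39.9, 57.2])   # distances on 3D CT
rp_mm = np.array([42.9, 56.1, 61.8, 49.6, 40.5, 58.0])   # same distances on RP model

t_stat, p_paired = stats.ttest_rel(ct_mm, rp_mm)          # systematic offset?
r, p_corr = stats.pearsonr(ct_mm, rp_mm)                  # do they co-vary?

print(f"paired t-test: t={t_stat:.2f}, p={p_paired:.4f}")
print(f"Pearson correlation: r={r:.3f}, p={p_corr:.4f}")
```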

An Advanced Visual Tracking and Stable Grasping Algorithm for a Moving Object (시각센서를 이용한 움직이는 물체의 추적 및 안정된 파지를 위한 알고리즘의 개발)

  • 차인혁;손영갑;한창수
    • Journal of the Korean Society for Precision Engineering, v.15 no.6, pp.175-182, 1998
  • An advanced visual tracking and stable grasping algorithm for a moving object is proposed. The stable grasping points for a moving 2D polygonal object are obtained through a visual tracking system with a Kalman filter and an image prediction technique. The accuracy and efficiency of tracking an object are improved over other prediction algorithms. During visual tracking, the shape predictors construct a parameterized family, and the grasp planner finds the grasping points of the unknown object through the geometric properties of the parameterized family. This algorithm performs stable grasping and real-time tracking together (see the Kalman-filter sketch below).
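The Kalman-filter prediction stage of such a tracker can be sketched as a constant-velocity filter on the object's image position, as below; the frame rate, noise covariances, and state layout are assumptions rather than the paper's specific design.

```python
# Hedged sketch: a constant-velocity Kalman filter predicting the image
# position of a moving object between frames, in the spirit of the tracking
# stage described above. Noise levels and the 2-D pixel state are assumed.
import numpy as np

dt = 1.0 / 30.0                                   # frame period
F = np.array([[1, 0, dt, 0],                      # state: [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], float)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], float)               # only (x, y) is measured
Q = np.eye(4) * 1e-2                              # process noise (assumed)
R = np.eye(2) * 2.0                               # measurement noise (assumed)

def kf_step(x, P, z):
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update with the measured image position z
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

if __name__ == "__main__":
    x, P = np.array([100.0, 80.0, 0.0, 0.0]), np.eye(4) * 10.0
    for z in ([103, 82], [106, 84], [109, 86]):   # centroid measurements
        x, P = kf_step(x, P, np.array(z, float))
    print("predicted next position:", (F @ x)[:2])
```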

A Study on Image Based Visual Tracking for SCARA Robot

  • Shin, Hang-Bong;Kim, Hong-Rae;Jung, Dong-Yean;Kim, Byeong-Chang;Han, Sung-Hyun
    • Institute of Control, Robotics and Systems (ICROS): Conference Proceedings, 2005.06a, pp.1944-1948, 2005
  • This paper shows how effective it is to use many features for improving the speed and accuracy of visual servo systems. Some rank conditions that relate the image Jacobian to the control performance are derived. It is also proven that accuracy improves as the number of features increases. The effectiveness of the redundant features is evaluated by the smallest singular value of the image Jacobian, which is closely related to the accuracy with respect to the world coordinate system (see the sketch below). The usefulness of redundant features is verified by real-time experiments on a dual-arm robot manipulator made by Samsung Electronics Co., Ltd.
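The smallest-singular-value criterion mentioned above can be illustrated with the standard point-feature image Jacobian (interaction matrix) used in visual servoing; the feature coordinates and depths below are arbitrary illustrative values.

```python
# Hedged sketch: stacking the standard point-feature image Jacobian
# (interaction matrix) for several features and checking its smallest
# singular value, the accuracy measure mentioned in the abstract.
import numpy as np

def point_jacobian(x, y, Z):
    """2x6 interaction matrix for a normalized image point (x, y) at depth Z."""
    return np.array([
        [-1/Z,   0.0,  x/Z,      x*y, -(1 + x*x),   y],
        [ 0.0, -1/Z,   y/Z,  1 + y*y,      -x*y,   -x],
    ])

def stacked_jacobian(features):
    """features: iterable of (x, y, Z) tuples for each tracked point."""
    return np.vstack([point_jacobian(x, y, Z) for x, y, Z in features])

if __name__ == "__main__":
    three = [(0.1, 0.0, 1.0), (-0.2, 0.1, 1.2), (0.0, -0.15, 0.9)]
    five = three + [(0.25, 0.2, 1.1), (-0.1, -0.25, 1.3)]
    for feats in (three, five):
        J = stacked_jacobian(feats)
        print(len(feats), "features: smallest singular value =",
              np.linalg.svd(J, compute_uv=False).min())
```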

A study on a development of a measurement technique for diffusion of oil spill in the ocean (디지털 화상처리에 의한 해양유출기름확산 계측기법개발에 관한 연구)

  • 이중우;김기철;강신영;도덕희
    • Proceedings of the Korean Institute of Navigation and Port Research Conference, 1998.10a, pp.211-221, 1998
  • A digital image processing technique is presented that can obtain the velocity vector distribution over the surface of spilled oil in the ocean without contacting the flow itself. The technique is based on PIV (Particle Image Velocimetry), and the system mainly consists of a high-sensitivity camera, a CCD camera, an image grabber, and a host computer in which an image processing algorithm is adopted for velocity vector acquisition. For acquiring the advective velocity vectors of floating matter on the ocean, a new multi-frame tracking algorithm is proposed, and for acquiring the diffusion velocity vector distribution of the oil spilled onto the water surface, a high-sensitivity gray-level cross-correlation algorithm is proposed (see the cross-correlation sketch below).
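The gray-level cross-correlation step of such a PIV system can be sketched as an FFT-based correlation of one interrogation window between two frames, as below; the window size and the synthetic shift are illustrative only.

```python
# Hedged sketch: one PIV interrogation step, estimating the displacement of a
# small window between two successive frames with FFT-based cross-correlation.
# The window size and the synthetic shift below are illustrative only.
import numpy as np

def piv_displacement(win_a, win_b):
    """Return (dy, dx) displacement of win_b relative to win_a (same shape)."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    # circular cross-correlation via FFT
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # map the correlation peak to a signed shift
    shift = [p - s if p > s // 2 else p for p, s in zip(peak, corr.shape)]
    return tuple(shift)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    frame_a = rng.random((64, 64))
    frame_b = np.roll(frame_a, shift=(3, -2), axis=(0, 1))   # known drift
    print(piv_displacement(frame_a, frame_b))                # expect (3, -2)
```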
