• Title/Summary/Keyword: Realtime Tracking

Robust servo control of high speed optical disk drives (고속 광 디스크 드라이브의 강인 서보제어)

  • 임승철;정태영
    • Proceedings of the Korean Society for Noise and Vibration Engineering Conference
    • /
    • 1997.10a
    • /
    • pp.438-444
    • /
    • 1997
  • Recently, optical disk drives have increasingly been required to offer higher speed as well as higher information density, especially for applications such as CD-ROM drives. To this end, improving the optical pick-up structure and its control is recognized as a very challenging issue. In this paper, the 2-D motion of the pick-up is first modelled analytically to identify the cause and effect of the troublesome cross coupling between the auto-focusing and tracking directions. Subsequently, the overall system equations are derived to include the dynamics of the related components in the auto-focusing servo system. With its unmeasurable parameters estimated by the least-squares error method, a simple but adequate linear model can be obtained within the operating frequency range. To design the high-speed and robust positional servo controller, the design specifications are detailed and the $H_\infty$ control method is employed based on the simple model. Using the pickup of a commercial 8x-speed CD-ROM drive as an example, the performance of the designed controller is verified by realtime experiments.

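The entry above rests on two steps that can be prototyped separately: least-squares estimation of the unmeasured pickup parameters, and an $H_\infty$ controller designed on the resulting linear model. The sketch below illustrates only the estimation step with a generic second-order ARX fit in NumPy; the model structure, the synthetic data, and the `estimate_arx` helper are illustrative assumptions, not the paper's actual actuator model or its $H_\infty$ design.

```python
import numpy as np

def estimate_arx(u, y):
    """Least-squares fit of y[n] = -a1*y[n-1] - a2*y[n-2] + b1*u[n-1] + b2*u[n-2]."""
    # Each regressor row collects the past outputs and inputs for one sample.
    Phi = np.column_stack([-y[1:-1], -y[:-2], u[1:-1], u[:-2]])
    rhs = y[2:]
    theta, *_ = np.linalg.lstsq(Phi, rhs, rcond=None)
    return theta  # [a1, a2, b1, b2]

# Synthetic data from a known second-order system, only to exercise the fit.
rng = np.random.default_rng(0)
u = rng.standard_normal(2000)
y = np.zeros_like(u)
for n in range(2, len(u)):
    y[n] = 1.5 * y[n - 1] - 0.7 * y[n - 2] + 0.5 * u[n - 1] + 0.3 * u[n - 2]
print(estimate_arx(u, y))  # approximately [-1.5, 0.7, 0.5, 0.3]
```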

Realtime Facial Expression Representation Method For Virtual Online Meetings System

  • Zhu, Yinge;Yerkovich, Bruno Carvacho;Zhang, Xingjie;Park, Jong-il
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • fall
    • /
    • pp.212-214
    • /
    • 2021
  • In a society with Covid-19 as part of our daily lives, we have had to adapt to a new reality in order to keep our lifestyles as normal as possible. Examples of this are teleworking and online classes. However, several issues appeared as we adjusted to this new way of living. One of them is the difficulty of knowing whether a real person is in front of the camera, or whether someone is paying attention during a lecture. We address this issue by creating a 3D reconstruction tool that actively identifies human faces and expressions. We use a web camera and a lightweight 3D face model, fitting expression coefficients from 2D facial landmarks to drive the 3D model. With this model, it is possible to represent our faces with an avatar and fully control its bones with rotation and translation parameters. We propose this method as a solution for reconstructing facial expressions during online meetings.

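A central step in the pipeline above is recovering the avatar's rotation and translation from 2D facial landmarks. The sketch below shows one conventional way to do that with OpenCV's `solvePnP`, assuming the landmarks have already been detected; the canonical 3D model points, the guessed camera intrinsics, and the six-landmark choice are illustrative placeholders rather than the authors' face model.

```python
import numpy as np
import cv2

# Hypothetical 3D reference points of a neutral face (nose tip, chin, eye corners,
# mouth corners) in arbitrary model units; a real system would use its own face model.
MODEL_POINTS = np.array([
    [0.0, 0.0, 0.0],        # nose tip
    [0.0, -63.6, -12.5],    # chin
    [-43.3, 32.7, -26.0],   # left eye outer corner
    [43.3, 32.7, -26.0],    # right eye outer corner
    [-28.9, -28.9, -24.1],  # left mouth corner
    [28.9, -28.9, -24.1],   # right mouth corner
], dtype=np.float64)

def head_pose(image_points, frame_size):
    """Estimate the face rotation/translation from six 2D landmarks (shape (6, 2))."""
    h, w = frame_size
    # Crude pinhole intrinsics guessed from the frame size (no lens calibration).
    camera_matrix = np.array([[w, 0, w / 2],
                              [0, w, h / 2],
                              [0, 0, 1]], dtype=np.float64)
    dist_coeffs = np.zeros(4)  # assume negligible lens distortion
    _, rvec, tvec = cv2.solvePnP(MODEL_POINTS, image_points,
                                 camera_matrix, dist_coeffs,
                                 flags=cv2.SOLVEPNP_ITERATIVE)
    return rvec, tvec  # rotation and translation to drive the avatar's root bone
```

`image_points` would come from any 2D facial landmark detector run on each webcam frame.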

A Development of Application for Realtime Tracking Plogging based on Deep Learning Model (딥러닝 모델을 활용한 실시간 플로깅 트래킹 어플리케이션 개발)

  • In-Hye Yoo;Da-Bin Kim;Jung-Yeon Park;Jung-Been Lee
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2023.11a
    • /
    • pp.434-435
    • /
    • 2023
  • Plogging is a social and environmental movement in which participants pick up street litter while jogging and record the activity on social network services (SNS). However, the inconvenience of having to enter the activity area and the type and amount of collected litter manually can hinder the spread of this activity. In this study, we developed a deep-learning-based plogging tracking application that automatically tracks and records these activities. The image recognition model, trained with a CNN and YOLOv5, recognized the type and amount of litter with high accuracy. As a result, users could record their plogging activities more conveniently, and a rewarding system based on the amount of collected litter or the distance covered can be used to encourage healthy competition among users.
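
A hedged sketch of the detection step described above, assuming a YOLOv5 model loaded through the public `ultralytics/yolov5` torch.hub entry point; the pretrained `yolov5s` weights and COCO class names stand in for the authors' litter-specific model, whose classes and training data are not given in the abstract.

```python
import torch

# Generic pretrained YOLOv5 model from the public hub entry point; the authors'
# model was trained on litter classes, so 'yolov5s' is only a placeholder.
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)

def count_litter(image_path, conf_threshold=0.5):
    """Run detection on one image and tally detections per class name."""
    results = model(image_path)
    det = results.pandas().xyxy[0]                  # one row per detected box
    det = det[det['confidence'] >= conf_threshold]
    return det['name'].value_counts().to_dict()

# Example: count_litter('plogging_frame.jpg') -> {'bottle': 2, 'cup': 1, ...}
# A rewarding system could accumulate such counts per user session.
```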

Design and implementation of motion tracking based on double difference with PTZ control (PTZ 제어에 의한 이중차영상 기반의 움직임 추적 시스템의 설계 및 구현)

  • Yang Geum-Seok;Yang Seung Min
    • The KIPS Transactions:PartB
    • /
    • v.12B no.3 s.99
    • /
    • pp.301-312
    • /
    • 2005
  • Three different cases should be considered for motion tracking: a moving object with a fixed camera, a fixed object with a moving camera, and a moving object with a moving camera. Two methods are widely used for motion tracking: the optical flow method and the difference frame method. The optical flow method is mainly used when either the object or the camera is fixed. It tracks the object using a time-space vector that compares object positions frame by frame. This method requires heavy computation and is not suitable for real-time monitoring systems such as a DVR (Digital Video Recorder). The difference frame method is used for a moving object with a fixed camera. It tracks the object by comparing the difference between the current frame and the background image. This method is good for real-time applications because the computation is light; however, it is not applicable if the camera is moving. This thesis proposes and implements a motion tracking system using the difference frame method with PTZ (Pan-Tilt-Zoom) control, so that it can be used for a moving object with a moving camera. Since the difference frame method is used, the system is suitable for real-time applications such as DVR.
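
A minimal sketch of the double-difference idea described above: two absolute differences over three consecutive frames are combined, and the motion centroid gives the pan/tilt error used to steer the camera. The threshold value and the centroid-to-PTZ mapping are illustrative assumptions, not the authors' implementation.

```python
import cv2

def double_difference_centroid(prev, curr, nxt, thresh=25):
    """Motion centroid from three consecutive grayscale frames, or None if no motion."""
    d1 = cv2.absdiff(curr, prev)
    d2 = cv2.absdiff(nxt, curr)
    motion = cv2.bitwise_and(d1, d2)                 # double difference image
    _, mask = cv2.threshold(motion, thresh, 255, cv2.THRESH_BINARY)
    m = cv2.moments(mask, True)                      # treat mask as binary
    if m['m00'] == 0:
        return None
    return m['m10'] / m['m00'], m['m01'] / m['m00']  # (x, y) centroid

def ptz_error(centroid, frame_shape):
    """Normalized pan/tilt error: how far the centroid lies from the frame center."""
    h, w = frame_shape[:2]
    cx, cy = centroid
    return (cx - w / 2) / (w / 2), (cy - h / 2) / (h / 2)

# In the capture loop, the pan/tilt error would be sent to the PTZ controller so
# the camera keeps the moving object near the image center.
```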

Realtime Facial Expression Data Tracking System using Color Information (컬러 정보를 이용한 실시간 표정 데이터 추적 시스템)

  • Lee, Yun-Jung;Kim, Young-Bong
    • The Journal of the Korea Contents Association
    • /
    • v.9 no.7
    • /
    • pp.159-170
    • /
    • 2009
  • It is very important to extract expression data and capture a face image from video for online 3D face animation. Recently, there have been many studies on vision-based approaches that capture the expression of an actor in a video and apply it to a 3D face model. In this paper, we propose an automatic data extraction system that extracts and traces a face and its expression data from realtime video input. The procedure of our system consists of three steps: face detection, face feature extraction, and face tracing. In face detection, we detect skin pixels using the YCbCr skin color model and verify the face area using a Haar-based classifier. We use brightness and color information to extract the eye and lip data related to facial expression. We extract 10 feature points from the eye and lip areas, considering the FAPs defined in MPEG-4. Then, we trace the displacement of the extracted features across consecutive frames using a color probability distribution model. The experiments showed that our system could trace the expression data at about 8 fps.
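
A compact sketch of the first two steps above, skin-pixel detection in YCbCr and Haar-based face verification, using OpenCV; the skin thresholds and the bundled frontal-face cascade are stand-ins, since the paper's exact parameters are not reproduced in the abstract. Feature-point extraction and the color-probability tracker would then operate on the returned face rectangle.

```python
import numpy as np
import cv2

# Frontal-face Haar cascade bundled with OpenCV (the paper's classifier may differ).
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

# Illustrative skin bounds in OpenCV's YCrCb ordering (Y, Cr, Cb).
SKIN_LOW = np.array([0, 133, 77], dtype=np.uint8)
SKIN_HIGH = np.array([255, 173, 127], dtype=np.uint8)

def detect_face(frame_bgr):
    """Skin-color candidate mask, then Haar-based verification of the face area."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    skin_mask = cv2.inRange(ycrcb, SKIN_LOW, SKIN_HIGH)
    candidates = cv2.bitwise_and(frame_bgr, frame_bgr, mask=skin_mask)
    gray = cv2.cvtColor(candidates, cv2.COLOR_BGR2GRAY)
    return face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
```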

Realtime 3D Stereoscopic Image based on Single Camera with Marker Recognition (마커 인식을 이용한 싱글카메라 기반의 실시간 3D 입체영상)

  • Hyun, Hye-Jung;Park, Jun-Hyoung;Ko, Il-Ju
    • The Journal of the Korea Contents Association
    • /
    • v.11 no.1
    • /
    • pp.92-101
    • /
    • 2011
  • Creating a stereoscopic 3D image independently, however, has required expensive production costs and specialized techniques. As 3D display devices have become widespread, there is an increasing need to produce stereoscopic 3D images without the burden of high costs. This paper proposes methods to implement stereoscopic 3D images easily by utilizing marker tracking with a single camera. In addition, the proposed approach allows the image resolution to be adjusted dynamically. This work is expected to promote the field of UCC (User Created Contents) based on stereoscopic 3D images by encouraging general users to participate actively in creating them.

Implementation of a Task Level Pipelined Multicomputer RV860-PIPE for Computer Vision Applications (컴퓨터 비젼 응용을 위한 태스크 레벨 파이프라인 멀티컴퓨터 RV860-PIPE의 구현)

  • Lee, Choong-Hwan;Kim, Jun-Sung;Park, Kyu-Ho
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.33B no.1
    • /
    • pp.38-48
    • /
    • 1996
  • We implemented and evaluated the performance of a task-level pipelined multicomputer, "RV860-PIPE (Realtime Vision i860 system using PIPEline)", for computer vision applications. RV860-PIPE is a message-passing MIMD computer with a ring interconnection network, which is appropriate for vision processing. We designed the node computer of RV860-PIPE around a 64-bit microprocessor to provide generality and high processing power for various vision algorithms. Furthermore, to reduce the communication overhead between node computers and between a node computer and the frame grabber, we designed dedicated high-speed communication channels between them. We showed the practical applicability of the implemented system by evaluating its performance on various computer vision applications such as edge detection, real-time moving object tracking, and real-time face recognition.


Real Time Abandoned and Removed Objects Detection System (실시간 방치 및 제거 객체 검출 시스템)

  • Jeong, Cheol-Jun;Ahn, Tae-Ki;Park, Jong-Hwa;Park, Goo-Man
    • Journal of Broadcast Engineering
    • /
    • v.16 no.3
    • /
    • pp.462-470
    • /
    • 2011
  • We propose a realtime object tracking system that detects abandoned or removed objects. Because these events are caused by humans, we use a tracking-based algorithm. After background subtraction with a Gaussian mixture model, shadow removal is applied for accurate object detection. A static object is then classified as either abandoned or removed. We assign a monitoring time to each static object to handle the situation in which it is temporarily overlapped by another object. We obtain more accurate detection by using a region growing method. We implemented the algorithm on a DSP processor and obtained excellent results throughout the experiments.
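
A minimal sketch of the foreground-extraction stage described above, using OpenCV's Gaussian-mixture background subtractor with its built-in shadow labelling; the parameter values and the static-object bookkeeping hinted at in the comments are assumptions, not the authors' DSP implementation.

```python
import cv2

# MOG2 background subtractor with shadow detection enabled; history and threshold
# values here are illustrative defaults, not the paper's settings.
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=True)

def foreground_mask(frame_bgr):
    """Foreground mask with shadow pixels removed."""
    mask = subtractor.apply(frame_bgr)
    # MOG2 marks shadows as 127 and true foreground as 255; keep only foreground.
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
    return mask

# A static-object stage would then track foreground blobs over time and flag a blob
# as abandoned or removed once it stays unchanged longer than the monitoring time.
```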

Realtime Facial Expression Recognition from Video Sequences Using Optical Flow and Expression HMM (광류와 표정 HMM에 의한 동영상으로부터의 실시간 얼굴표정 인식)

  • Chun, Jun-Chul;Shin, Gi-Han
    • Journal of Internet Computing and Services
    • /
    • v.10 no.4
    • /
    • pp.55-70
    • /
    • 2009
  • Vision-based human-computer interaction is an emerging field of science and industry that provides a natural way for humans and computers to communicate. In that sense, inferring a person's emotional state through facial expression recognition is an important issue. In this paper, we present a novel approach to recognizing facial expressions from a sequence of input images using emotion-specific HMMs (Hidden Markov Models) and facial motion tracking based on optical flow. Conventionally, in an HMM consisting of basic emotional states, transitions between emotions are constrained to pass through the neutral state. In this work, however, we propose an enhanced transition framework that, in addition to the traditional transition model, allows direct transitions between emotional states without passing through the neutral state. For localizing facial features in the video sequence, we exploit template matching and optical flow. The facial feature displacements traced by optical flow are used as input parameters to the HMMs for facial expression recognition. The experiments show that the proposed framework can effectively recognize facial expressions in real time.

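A compressed sketch of the two stages above: Lucas-Kanade optical flow for tracing facial feature displacements, and a bank of per-emotion Gaussian HMMs scored on the displacement sequence. The hmmlearn-based classifier and the helper names are assumptions standing in for the paper's emotion-specific HMMs and their enhanced transition structure, which is not reproduced here.

```python
import cv2
from hmmlearn import hmm  # GaussianHMM models, assumed trained per emotion

def track_displacements(prev_gray, next_gray, points):
    """Lucas-Kanade flow: displacements of feature points (float32, shape (N, 1, 2))."""
    next_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, points, None)
    good = status.ravel() == 1
    return (next_pts[good] - points[good]).reshape(-1, 2)

def classify_expression(displacement_sequence, models):
    """models: dict mapping an emotion name to a trained hmm.GaussianHMM."""
    # Each HMM scores the observed displacement sequence; the best-scoring emotion wins.
    scores = {name: m.score(displacement_sequence) for name, m in models.items()}
    return max(scores, key=scores.get)
```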

A Fusion Algorithm considering Error Characteristics of the Multi-Sensor (다중센서 오차특성을 고려한 융합 알고리즘)

  • Hyun, Dae-Hwan;Yoon, Hee-Byung
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.36 no.4
    • /
    • pp.274-282
    • /
    • 2009
  • Various location tracking sensors, such as GPS, INS, radar, and optical equipment, are used for tracking moving targets. To track moving targets effectively, an effective fusion method for these heterogeneous devices is needed. There have been studies in which the estimates of each sensor were regarded as different models and fused together, considering the different error characteristics of the sensors, to improve tracking performance with heterogeneous multi-sensor systems. However, when the error of one sensor increases sharply, the error of the fused estimate also increases, and approaches that substitute the estimated sensor values according to the Sensor Probability could not be applied in real time. In this study, the Sensor Probability is obtained by comparing the RMSE (Root Mean Square Error) of the difference between the updated and measured values of each sensor's Kalman filter. The process of substituting the new combined values for the Kalman filter input values of each sensor is excluded. This improves both the real-time applicability of the estimated sensor values and the tracking performance in regions where a sensor's performance has rapidly degraded. The proposed algorithm adds the error characteristic of each sensor as a conditional probability value and ensures greater accuracy by performing track fusion with the most reliable sensors. In the experiments, a UAV trajectory is generated and the performance is compared with that of other fusion algorithms.
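
The entry above hinges on weighting each sensor's track by a Sensor Probability computed from the RMSE between its Kalman-updated and measured values. The NumPy sketch below illustrates that weighting-and-fusion idea under simplifying assumptions (an inverse-RMSE normalization and a plain weighted average); the paper's conditional-probability formulation and the Kalman filter itself are not reproduced.

```python
import numpy as np

def sensor_weights(updated, measured):
    """Per-sensor weight from the RMSE between Kalman-updated and measured values.

    updated, measured: arrays of shape (n_sensors, n_steps, state_dim).
    Lower RMSE -> higher weight; the inverse-RMSE normalization is an assumption.
    """
    rmse = np.sqrt(np.mean((updated - measured) ** 2, axis=(1, 2)))  # one RMSE per sensor
    inv = 1.0 / np.maximum(rmse, 1e-9)
    return inv / inv.sum()

def fuse_tracks(tracks, weights):
    """Weighted combination of per-sensor state estimates at one time step."""
    tracks = np.asarray(tracks)                   # shape (n_sensors, state_dim)
    return np.tensordot(weights, tracks, axes=1)  # fused state estimate
```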