• Title/Summary/Keyword: vision-based tracking

Search Results: 5

Drawing Tool with Vision-Based Tracking System (저가의 비전 기반 트래킹 시스템을 이용한 그림 툴)

  • Lee, Ju-Young; Hu, Haejung; Park, Mijung; Lee, Sunkyu; Seo, Minyoung; Yoo, Juhee
    • Proceedings of the Korean Society of Computer Information Conference / 2012.07a / pp.295-296 / 2012
  • The drawing tool takes the user's fingertip movements as input through a real-time video stream and a tracking system, and draws virtual objects on the screen. The core technology is a low-cost vision-based tracking system developed using CUDA for parallel processing. We describe the design and implementation of the low-cost tracking system and the drawing tool, and discuss directions for future development.

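The paper's CUDA pipeline is not given here, but the following minimal OpenCV sketch shows the general idea of marker-less fingertip input for a drawing tool: segment a skin-colored region per frame and paint at the topmost contour point. The skin-color thresholds and camera index are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of vision-based fingertip drawing (illustrative only,
# not the paper's CUDA implementation). Thresholds are assumptions.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)              # inexpensive web camera
canvas = None                          # virtual objects are drawn here

while True:
    ok, frame = cap.read()
    if not ok:
        break
    if canvas is None:
        canvas = np.zeros_like(frame)

    # Rough skin segmentation in YCrCb space (illustrative thresholds).
    ycrcb = cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb,
                       np.array((0, 135, 85), np.uint8),
                       np.array((255, 180, 135), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        hand = max(contours, key=cv2.contourArea)
        x, y = hand[hand[:, :, 1].argmin()][0]   # topmost point ~ fingertip
        cv2.circle(canvas, (int(x), int(y)), 3, (0, 255, 0), -1)

    cv2.imshow("drawing", cv2.add(frame, canvas))
    if cv2.waitKey(1) & 0xFF == 27:    # Esc quits
        break

cap.release()
cv2.destroyAllWindows()
```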

Improved Tracking System and Realistic Drawing for Real-Time Water-Based Sign Pen (향상된 트래킹 시스템과 실시간 수성 사인펜을 위한 사실적 드로잉)

  • Hur, Hyejung; Lee, Ju-Young
    • Journal of the Korea Society of Computer and Information / v.19 no.2 / pp.125-132 / 2014
  • In this paper, we present a marker-less fingertip and brush tracking system that uses an inexpensive web camera. Parallel computation using CUDA is applied to the tracking system, so it runs in inexpensive environments such as a laptop or desktop and supports real-time applications. We also present a realistic water-based sign-pen drawing model and its implementation. The realistic drawing application, combined with our inexpensive real-time fingertip and brush tracking system, offers a glimpse of the art class of the future and could serve as a test-bed for future high-technology education environments.
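The paper's drawing model is not reproduced here; as a rough illustration of how a water-based pen stroke can be built from tracked positions, the sketch below (all constants assumed) stamps soft circular ink footprints whose overlap saturates toward the pen color, mimicking wet ink accumulating on paper.

```python
# Illustrative water-based sign-pen deposition model (all constants assumed;
# not the paper's drawing model). Overlapping stamps saturate toward the ink.
import numpy as np

H, W = 480, 640
paper = np.ones((H, W, 3), dtype=np.float32)       # white paper, RGB in [0,1]
ink = np.array([0.1, 0.1, 0.6], dtype=np.float32)  # pen color

def stamp(cx: float, cy: float, radius: float = 6.0, wetness: float = 0.25):
    """Deposit one soft circular ink footprint centered at (cx, cy)."""
    y, x = np.ogrid[:H, :W]
    d2 = (x - cx) ** 2 + (y - cy) ** 2
    alpha = wetness * np.clip(1.0 - d2 / radius ** 2, 0.0, 1.0)  # soft edge
    paper[:] = paper * (1 - alpha[..., None]) + ink * alpha[..., None]

# A stroke is the sequence of tracked pen positions; overlap darkens the line.
for t in np.linspace(0.0, 1.0, 200):
    stamp(100 + 400 * t, 240 + 60 * np.sin(6.28 * t))
```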

Design of a Background Image Based Multi-Degree-of-Freedom Pointing Device (배경영상 기반 다자유도 포인팅 디바이스의 설계)

  • Jang, Suk-Yoon; Kho, Jae-Won
    • Journal of the Institute of Electronics Engineers of Korea SP / v.45 no.6 / pp.133-141 / 2008
  • As interactive multimedia has come into wide use, user interfaces such as remote controls and conventional computer mice have shown limitations that cause inconvenience. We propose a vision-based pointing device to resolve this problem. We analyze the moving image from a camera embedded in the pointing device and estimate the movement of the device; the pose of the cursor is determined from this result. To process in real time, we use a low-resolution 288×208-pixel camera, and the corner points of the screen are tracked using a local optical-flow method. The distance between the screen and the device is calculated from the size of the screen in the image. The proposed device has a simple configuration and low cost, is easy to use, and offers intuitive handheld operation like a traditional mouse. Moreover, it shows reliable performance even in dark conditions.
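A minimal sketch of the corner-tracking step described above, using pyramidal Lucas-Kanade optical flow in OpenCV; the window size, pyramid levels, and the assumption that the four screen-corner points are already initialized are ours, not the paper's.

```python
# Minimal corner-tracking sketch with pyramidal Lucas-Kanade optical flow
# (parameters and corner initialization are assumptions, not the paper's).
import cv2
import numpy as np

LK_PARAMS = dict(winSize=(15, 15), maxLevel=2,
                 criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT,
                           10, 0.03))

def cursor_delta(prev_gray, gray, corners):
    """Track the four screen-corner points and return (new corners, motion).

    corners: float32 array of shape (4, 1, 2) located in prev_gray.
    """
    new_corners, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, gray, corners, None, **LK_PARAMS)
    good = status.ravel() == 1
    if not good.any():
        return corners, np.zeros(2, np.float32)
    # Mean displacement of the tracked corners approximates the device
    # motion, from which the cursor movement is derived; the apparent
    # screen size in the image gives the screen-to-device distance.
    delta = (new_corners[good] - corners[good]).reshape(-1, 2).mean(axis=0)
    return new_corners, delta
```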

A Study on the Application Model of AI Convergence Services Using CCTV Video for the Advancement of Retail Marketing (리테일 마케팅 고도화를 위한 CCTV 영상 데이터 기반의 AI 융합 응용 서비스 활용 모델 연구)

  • Kim, Jong-Yul; Kim, Hyuk-Jung
    • Journal of Digital Convergence / v.19 no.5 / pp.197-205 / 2021
  • Recently, the retail industry has increasingly demanded the convergence and utilization of information technology to respond to external environmental threats such as COVID-19 and to stay competitive using AI technologies, yet research and application services remain scarce. This study is a case study of a CCTV-video-driven AI application: CCTV image data are collected in a retail space; an AI model performs object detection and tracking; a time-series database stores the tracked objects and tracking data in real time; and heatmaps are used to analyze congestion and customer interest in the retail space and its social access zones. We present this application model and verify its usability through a practical implementation.
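As a minimal illustration of the heatmap stage (the record schema, grid size, and frame size are assumptions, not the paper's implementation), tracked-object centroids replayed from the time-series store can be binned into a coarse grid whose hot cells mark congested or high-interest zones.

```python
# Illustrative congestion heatmap from tracked-object centroids
# (grid and frame sizes are assumptions, not the paper's values).
import numpy as np

GRID_H, GRID_W = 48, 64            # coarse cells over the camera view
FRAME_H, FRAME_W = 1080, 1920      # CCTV frame size (assumed)
heatmap = np.zeros((GRID_H, GRID_W), dtype=np.int64)

def record(track_id: int, t: float, cx: float, cy: float) -> None:
    """One replayed time-series row: (track id, timestamp, centroid in px)."""
    gy = min(int(cy * GRID_H / FRAME_H), GRID_H - 1)
    gx = min(int(cx * GRID_W / FRAME_W), GRID_W - 1)
    heatmap[gy, gx] += 1

# Replaying stored tracking rows builds the map; hot cells indicate
# congested or high-interest zones of the retail floor.
record(track_id=7, t=1620000000.0, cx=960.0, cy=540.0)
occupancy = heatmap / max(heatmap.sum(), 1)   # normalized dwell share
```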

A Collaborative Video Annotation and Browsing System using Linked Data (링크드 데이터를 이용한 협업적 비디오 어노테이션 및 브라우징 시스템)

  • Lee, Yeon-Ho; Oh, Kyeong-Jin; Sean, Vi-Sal; Jo, Geun-Sik
    • Journal of Intelligence and Information Systems / v.17 no.3 / pp.203-219 / 2011
  • Previously, most users simply watched video content without any specific requirements or purposes. Today, however, while watching a video, users attempt to learn and discover more about the things that appear in it. The demand for finding multimedia and browsing information about objects of interest is therefore spreading with the increasing use of video, which is available not only on internet-capable devices such as computers but also on smart TVs and smartphones. To meet these requirements, labor-intensive annotation of objects in video content is inevitable, and many researchers have actively studied methods of annotating objects that appear in video. In keyword-based annotation, related information about an object appearing in the video content is added directly, and annotation data including all related information about the object must be managed individually; users have to input all of that information themselves. Consequently, when a user browses for information related to an object, the user can find only the limited resources that exist in the annotated data, and placing annotations requires a huge workload. To reduce this workload and minimize the effort involved in annotation, existing object-based approaches attempt automatic annotation using computer vision techniques such as object detection, recognition, and tracking. However, such techniques must detect and recognize the wide variety of objects that appear in video content, and fully automated annotation still faces difficulties. To overcome these difficulties, we propose a system consisting of two modules. The first is an annotation module that enables many annotators to collaboratively annotate the objects in video content and to access semantic data using Linked Data. Annotation data managed by the annotation server are represented using an ontology so that the information can easily be shared and extended. Since the annotation data do not include all the relevant information about an object, existing resources in Linked Data and objects that appear in the video content are simply connected to each other to obtain all the related information. In other words, annotation data containing only a URI and metadata such as position, time, and size are stored on the annotation server; when a user needs other related information about the object, it is retrieved from Linked Data through the relevant URI. The second module enables viewers to browse interesting information about an object using the annotation data collaboratively generated by many users while watching the video. With this system, a query is automatically generated through simple user interaction, the related information is retrieved from Linked Data, and the additional information about the object is offered to the user. In the future Semantic Web environment, our proposed system is expected to establish a better video content service environment by offering users relevant information about the objects that appear on the screen of any internet-capable device such as a PC, smart TV, or smartphone.
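A minimal sketch of the storage idea described above: the annotation keeps only a URI plus position/time/size metadata, and richer facts are fetched on demand from Linked Data. The record fields and the DBpedia abstract query below are illustrative assumptions, not the paper's ontology or server API.

```python
# Illustrative annotation record and on-demand Linked Data lookup
# (assumed fields and query; not the paper's ontology or server API).
from dataclasses import dataclass

from SPARQLWrapper import SPARQLWrapper, JSON

@dataclass
class Annotation:
    uri: str        # Linked Data resource the on-screen object links to
    start: float    # appearance time in the video (seconds)
    end: float
    x: int          # bounding-box position and size on screen
    y: int
    w: int
    h: int

def describe(ann: Annotation) -> str:
    """Fetch an English abstract for the annotated object from DBpedia."""
    sparql = SPARQLWrapper("https://dbpedia.org/sparql")
    sparql.setQuery(f"""
        SELECT ?abstract WHERE {{
          <{ann.uri}> <http://dbpedia.org/ontology/abstract> ?abstract .
          FILTER (lang(?abstract) = "en")
        }}""")
    sparql.setReturnFormat(JSON)
    rows = sparql.query().convert()["results"]["bindings"]
    return rows[0]["abstract"]["value"] if rows else ""

# Only the URI and metadata are stored; everything else comes via the URI.
ann = Annotation("http://dbpedia.org/resource/Eiffel_Tower",
                 start=12.0, end=18.5, x=100, y=40, w=80, h=200)
print(describe(ann))
```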