• Title/Summary/Keyword: Video Annotation


AnoVid: A Deep Neural Network-based Tool for Video Annotation (AnoVid: 비디오 주석을 위한 심층 신경망 기반의 도구)

  • Hwang, Jisu; Kim, Incheol
    • Journal of Korea Multimedia Society, v.23 no.8, pp.986-1005, 2020
  • In this paper, we propose AnoVid, an automated video annotation tool based on deep neural networks that automatically generates various metadata for each scene or shot in a long drama video containing rich elements. To this end, a novel metadata schema for drama video is designed. Based on this schema, the AnoVid annotation tool employs a total of six deep neural network models for object detection, place recognition, time zone recognition, person recognition, activity detection, and description generation. Using these models, AnoVid can generate rich video annotation data. In addition, AnoVid not only automatically generates a JSON-type video annotation data file, but also provides various visualization facilities for checking the video content analysis results. Through experiments on a real drama video, "Misaeng", we show the practical effectiveness and performance of the proposed video annotation tool.
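
The abstract does not reproduce AnoVid's metadata schema, but a per-shot JSON record combining the six model outputs might look like the minimal sketch below; every field name here is an illustrative assumption, not the paper's actual schema.

```python
import json

# Hypothetical per-shot annotation record combining the six AnoVid model
# outputs (objects, place, time zone, persons, activities, description).
# Field names are illustrative assumptions, not the paper's actual schema.
shot_annotation = {
    "shot_id": "ep01_shot_0042",
    "start_sec": 318.4,
    "end_sec": 325.1,
    "objects": [{"label": "desk", "bbox": [120, 80, 410, 360]}],
    "place": "office",
    "time_zone": "day",
    "persons": ["Jang Geu-rae"],
    "activities": ["talking"],
    "description": "Two employees talk at a desk in the office.",
}

# AnoVid exports a JSON annotation file; this mimics that output step.
with open("ep01_annotations.json", "w", encoding="utf-8") as f:
    json.dump([shot_annotation], f, ensure_ascii=False, indent=2)
```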

Augmented Reality Annotation for Real-Time Collaboration System

  • Cao, Dongxing; Kim, Sangwook
    • Journal of Korea Multimedia Society, v.23 no.3, pp.483-489, 2020
  • Advancements in mobile phone hardware and network connectivity have made communication more and more convenient. Compared to pictures or text, people prefer to share videos to convey information, and making intentions clear by annotating comments directly on the video is an important issue. Recently there have been many attempts at video annotation, but these previous works have limitations: they do not support user-defined handwritten annotations or annotating local video. We therefore propose an augmented reality based real-time video annotation system that allows users to freely make annotations directly on the video. The contribution of this work is a real-time video annotation system built on recent augmented reality platforms that not only enables drawing geometric shapes on video in real time but also drastically reduces production costs. For practical use, we propose a real-time collaboration system based on the proposed annotation method. Experimental results show that the proposed annotation method meets the collaboration system's requirements of real-time performance, accuracy, and robustness.
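
The abstract does not detail the system's message format; the sketch below shows one plausible way to serialize a freehand stroke annotation for broadcast in a real-time collaboration session. All names and fields are assumptions, not the paper's actual protocol.

```python
import json
import time
from dataclasses import dataclass, field, asdict

# Hypothetical wire format for sharing a freehand annotation stroke in a
# real-time collaboration session; fields are assumptions, not the
# paper's actual protocol.
@dataclass
class Stroke:
    user_id: str
    video_time_sec: float          # position in the video being annotated
    points: list                   # normalized (x, y) screen coordinates
    color: str = "#ff0000"
    created_at: float = field(default_factory=time.time)

def encode_stroke(stroke: Stroke) -> str:
    """Serialize a stroke so it can be broadcast to other participants."""
    return json.dumps(asdict(stroke))

msg = encode_stroke(Stroke("alice", 12.3, [(0.10, 0.20), (0.12, 0.24)]))
print(msg)  # ready to send over a WebSocket or similar channel
```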

Development of Video Database and a Video Annotation Tool for Evaluation of Smart CCTV System (지능형CCTV시스템 성능평가를 위한 영상DB와 영상 주석도구 개발)

  • Park, Jang-Sik; Yi, Seung-Jai
    • The Journal of the Korea Institute of Electronic Communication Sciences, v.9 no.7, pp.739-745, 2014
  • In this paper, an evaluation method for intelligent CCTV systems is proposed, based on recorded evaluation videos and a video DB. Videos for evaluation are recorded separately for far, mid, and near zones. The video DB holds video recording information, detection areas, and ground truth in XML format. A video annotation tool is also proposed to create ground truth efficiently; it records ground truths for videos and supports evaluation by comparing system alarms with the ground truths.
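
As a rough illustration of the XML ground-truth format the abstract mentions, the sketch below writes one hypothetical record with Python's standard library; the tag and attribute names are assumptions, not the paper's actual layout.

```python
import xml.etree.ElementTree as ET

# Hypothetical ground-truth record for one evaluation video; tag and
# attribute names are assumptions, not the paper's actual XML layout.
video = ET.Element("video", name="entrance_far_01.avi", zone="far")
area = ET.SubElement(video, "detection_area")
ET.SubElement(area, "rect", x="100", y="150", w="320", h="240")
gt = ET.SubElement(video, "ground_truth")
ET.SubElement(gt, "event", type="intrusion",
              start_frame="1520", end_frame="1760")

# Evaluation would compare system alarm spans against these event spans.
ET.ElementTree(video).write("ground_truth.xml", encoding="utf-8",
                            xml_declaration=True)
```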

Designing Video-based Teacher Professional Development: Teachers' Meaning Making with a Video Annotation Tool

  • So, Hyo-Jeong; Lim, Weiying; Xiong, Yao
    • Educational Technology International, v.17 no.1, pp.87-116, 2016
  • In this research, we designed a teacher professional development (PD) program where a small group of mathematics teachers could share, reflect on, and discuss their pedagogical knowledge and practices in ICT-integrated lessons, using a video annotation tool called DIVER. The main purposes of this paper are both micro and macro: to examine how the teachers engaged in the meaning-making process in a video-based PD (micro), and to derive implications for designing effective video-based teacher PD programs toward a teacher community of practice (macro). To examine the teachers' meaning-making in the PD sessions, discourse data from a series of 10 meetings were segmented into idea units and coded to identify discourse patterns, focusing on (a) participation levels, (b) conversation topics, and (c) conversation depth. Regarding the affordances of DIVER, the discourse patterns of two meetings, before and after individual annotation with DIVER, were compared through qualitative vignette analysis. Overall, we found that the teacher discourse shifted its focus from surface features to deeper pedagogical issues as the PD sessions progressed. In particular, the annotation function in DIVER afforded the teachers flexible, descriptive analyses of video clips, thereby helping them become cognitively prepared to take interpretive and evaluative stances in face-to-face discussions with colleagues. In conclusion, drawing on our research experiences, we discuss the possibilities and challenges of designing video-based teacher PD in a school context.

A Collaborative Video Annotation and Browsing System using Linked Data (링크드 데이터를 이용한 협업적 비디오 어노테이션 및 브라우징 시스템)

  • Lee, Yeon-Ho; Oh, Kyeong-Jin; Sean, Vi-Sal; Jo, Geun-Sik
    • Journal of Intelligence and Information Systems, v.17 no.3, pp.203-219, 2011
  • Previously, users were content simply to watch video content without any specific requirements or purposes. Today, however, while watching a video users attempt to learn and discover more about the things that appear in it. The requirement to find multimedia, or to browse information about objects of interest, is therefore spreading with the increasing use of video not only on internet-capable devices such as computers but also on smart TVs and smartphones. To meet these requirements, labor-intensive annotation of the objects in video content is unavoidable, and many researchers have actively studied methods of annotating the objects that appear in video. In keyword-based annotation, related information about an object appearing in the video content is added directly, and annotation data including all related information about the object must be managed individually; users have to input all of this related information themselves. Consequently, when a user browses for information related to an object, the user can only find the limited resources that exist in the annotated data, and placing annotations on objects requires a huge workload. To reduce this workload and minimize the effort involved in annotation, existing object-based annotation approaches attempt automatic annotation using computer vision techniques such as object detection, recognition, and tracking. With such techniques, however, the wide variety of objects appearing in video content must all be detected and recognized, which remains an open problem for automated annotation. To overcome these difficulties, we propose a system consisting of two modules. The first is an annotation module that enables many annotators to collaboratively annotate the objects in video content and to access semantic data using Linked Data. The annotation data managed by the annotation server is represented using an ontology so that the information can easily be shared and extended. Since the annotation data does not include all the relevant information about an object, existing resources in Linked Data and objects appearing in the video content are simply connected to each other to obtain all the related information: annotation data containing only a URI and metadata such as position, time, and size are stored on the annotation server, and when a user needs other related information about the object, it is retrieved from Linked Data through the relevant URI. The second module enables viewers to browse interesting information about an object, while watching the video, using the annotation data collaboratively generated by many users. With this system, a query is automatically generated through simple user interaction, all the related information is retrieved from Linked Data, and the additional information about the object is offered to the user. In the future Semantic Web environment, our proposed system is expected to establish a better video content service environment by offering users relevant information about the objects that appear on the screen of any internet-capable device such as a PC, smart TV, or smartphone.
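
The core design point is that an annotation stores only a URI plus position, time, and size, and everything else is pulled from Linked Data at browsing time. The sketch below illustrates that flow against the public DBpedia SPARQL endpoint; the record fields and the example resource are assumptions, not the paper's implementation.

```python
import json
import urllib.parse
import urllib.request

# Hypothetical annotation record: only a Linked Data URI plus position,
# time, and size are stored on the annotation server, as in the paper;
# the field names here are assumptions.
annotation = {
    "uri": "http://dbpedia.org/resource/Eiffel_Tower",
    "time_sec": 42.0,
    "position": {"x": 0.31, "y": 0.18},
    "size": {"w": 0.20, "h": 0.35},
}

# At browsing time, related information is fetched from Linked Data
# through the stored URI; this queries the public DBpedia endpoint.
query = """
SELECT ?abstract WHERE {
  <%s> <http://dbpedia.org/ontology/abstract> ?abstract .
  FILTER (lang(?abstract) = "en")
}
""" % annotation["uri"]

url = "https://dbpedia.org/sparql?" + urllib.parse.urlencode(
    {"query": query, "format": "application/sparql-results+json"})
with urllib.request.urlopen(url) as resp:
    bindings = json.load(resp)["results"]["bindings"]
if bindings:
    print(bindings[0]["abstract"]["value"][:200])
```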

Annotation Method based on Face Area for Efficient Interactive Video Authoring (효과적인 인터랙티브 비디오 저작을 위한 얼굴영역 기반의 어노테이션 방법)

  • Yoon, Ui Nyoung; Ga, Myeong Hyeon; Jo, Geun-Sik
    • Journal of Intelligence and Information Systems, v.21 no.1, pp.83-98, 2015
  • Many TV viewers mainly use portal sites to retrieve information related to a broadcast while watching TV. However, finding the desired information takes a long time because the internet presents too much information that is not required, and this process cannot satisfy users who want to consume information immediately. Interactive video is being actively investigated to solve this problem. An interactive video provides clickable objects, areas, or hotspots to interact with users: when users click an object in the interactive video, they instantly see additional information related to the video. Making an interactive video with an authoring tool involves three basic steps: (1) create an augmented object; (2) set the object's area and the time it is displayed on the video; (3) set an interactive action linked to pages or hyperlinks. Users of existing authoring tools such as Popcorn Maker and Zentrick spend a lot of time in step (2). Tools like wireWAX use a vision-based annotation method, which saves time in setting the object's location and display time, but users must wait while the object is detected and tracked. It is therefore desirable to reduce the time spent in step (2) by effectively combining the benefits of manual and vision-based annotation methods. This paper proposes a novel annotation method that allows an annotator to annotate easily based on face areas, in two stages: a pre-processing step and an annotation step. Pre-processing, which also lets users find video content easily through detected shots, proceeds as follows: 1) extract shots from the video frames using a color-histogram-based shot boundary detection method; 2) cluster shots by similarity and align them as shot sequences; and 3) detect and track faces in all shots of each shot sequence and save them into the shot sequence metadata with each shot. After pre-processing, the user annotates objects as follows: 1) the annotator selects a shot sequence, then selects a keyframe of a shot in that sequence; 2) the annotator annotates objects at positions relative to the actor's face on the selected keyframe, and the same objects are annotated automatically, wherever a face area has been detected, until the end of the shot sequence; and 3) the user assigns additional information to the annotated objects. In addition, this paper designs a feedback model to compensate for defects that may occur after object annotation: wrongly aligned shots, wrongly detected faces, and inaccurate locations. Users can also interpolate the positions of objects deleted by feedback, and finally save the annotated object data to the interactive object metadata. The paper presents an interactive video authoring system implemented to verify the performance of the proposed annotation method; the experiments analyze object annotation time and report a user evaluation. The average object annotation time with the proposed tool is two times faster than with existing authoring tools, although annotation occasionally took longer when wrong shots were detected in pre-processing. The usefulness and convenience of the system were measured through a user evaluation aimed at users experienced with interactive video authoring systems: 19 recruited experts answered 11 questions drawn from the CSUQ (Computer System Usability Questionnaire), designed by IBM for evaluating systems. The evaluation showed that the proposed tool is about 10% more useful for authoring interactive video than the other interactive video authoring systems.
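
Pre-processing step 1 above relies on color-histogram-based shot boundary detection, a standard technique; a minimal OpenCV sketch follows, with the histogram parameters and threshold being assumed values rather than the paper's settings.

```python
import cv2

def detect_shot_boundaries(video_path, threshold=0.6):
    """Mark a shot boundary where the HSV color-histogram correlation
    between consecutive frames drops below the threshold (assumed value)."""
    cap = cv2.VideoCapture(video_path)
    boundaries, prev_hist, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1], None, [50, 60], [0, 180, 0, 256])
        cv2.normalize(hist, hist)
        if prev_hist is not None:
            similarity = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CORREL)
            if similarity < threshold:
                boundaries.append(idx)  # a new shot starts at this frame
        prev_hist, idx = hist, idx + 1
    cap.release()
    return boundaries
```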

Design and Implementation of a Video Retrieval System Using Semantic-based Annotation (의미 기반 주석을 이용한 비디오 검색 시스템의 설계 및 구현)

  • Hong, Su-Yeol (홍수열)
    • Journal of the Korea Society of Computer and Information, v.5 no.3, pp.99-105, 2000
  • Video has become an important element of multimedia computing and communication environments, with applications as varied as broadcasting, education, publishing, and military intelligence. The need for efficient multimedia data retrieval methods keeps increasing on account of various large-scale multimedia applications. Accordingly, the retrieval and representation of video data has become one of the main research issues in video databases. For the representation of video data there have been mainly two approaches: (1) content-based video retrieval and (2) annotation-based video retrieval. This paper designs and implements a video retrieval system using semantic-based annotation.
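
As a rough sketch of the annotation-based approach the paper contrasts with content-based retrieval: scenes whose annotation keywords overlap the query terms are returned, ranked by overlap. The records and fields below are illustrative assumptions, not the paper's design.

```python
# Toy annotation store: each scene carries a set of annotated keywords.
annotations = [
    {"scene": "news_01#12", "keywords": {"anchor", "studio", "election"}},
    {"scene": "news_01#27", "keywords": {"reporter", "street", "election"}},
]

def retrieve(query_terms):
    terms = set(query_terms)
    # Rank scenes by how many query terms their annotations cover.
    hits = [(len(terms & a["keywords"]), a["scene"]) for a in annotations]
    return [scene for score, scene in sorted(hits, reverse=True) if score]

print(retrieve(["election", "street"]))  # ['news_01#27', 'news_01#12']
```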


Video Data Retrieval System using Annotation and Feature Information (주석정보와 특징정보를 이용한 비디오데이터 검색 시스템)

  • Lee, Keun-Wang
    • Journal of the Korea Academia-Industrial Cooperation Society, v.7 no.6, pp.1129-1133, 2006
  • In this thesis, we propose a semantics-based video retrieval system that supports semantic retrieval of massive video data for various users. The proposed system automatically extracts the content information contained in video data and processes retrieval using an agent that integrates annotation-based and feature-based retrieval. In performance assessment experiments, the designed and implemented system shows increased recall and precision for video data scene retrieval.


WalkieTagging: Efficient Speech-Based Video Annotation Method for Smart Devices (워키태깅 : 스마트폰 환경에서 음성기반의 효과적인 영상 콘텐츠 어노테이션 방법에 관한 연구)

  • Park, Joon Young; Lee, Soobin; Kang, Dongyeop; Seok, YoungTae
    • Journal of Information Technology Services, v.12 no.1, pp.271-287, 2013
  • The rapid growth and dissemination of touch-based mobile devices such as smartphones and tablet PCs gives people numerous benefits in using a variety of multimedia content. Portability enables users to watch a soccer game, search videos on YouTube, and sometimes tag content on the road. However, the limited screen size of mobile devices, and the touch-based character input methods built on it, remain major obstacles to searching and tagging multimedia content. In this paper, we propose WalkieTagging, which provides a much more intuitive approach than previous ones. Like other video tagging services, WalkieTagging, as a voice-based annotation service, supports inserting detailed annotation data, including start time, duration, and tags, with little user effort. To evaluate our method, we developed an Android-based WalkieTagging application and performed a two-week user study. Through experiments with a total of 46 participants, we observed that they found our system more convenient and useful than the touch-based alternative. Consequently, we found that voice-based annotation methods can provide users with greater convenience and satisfaction than touch-based methods in mobile environments.
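
The abstract states that a WalkieTagging annotation carries a start time, duration, and tags; a minimal sketch of such a record follows, with field names assumed rather than taken from the app's actual storage format.

```python
from dataclasses import dataclass

# Hypothetical WalkieTagging-style record: a voice annotation is anchored
# to a span of the video and carries recognized tags. Field names are
# assumptions, not the app's actual storage format.
@dataclass
class VoiceAnnotation:
    video_id: str
    start_sec: float      # where in the video the annotation begins
    duration_sec: float   # how long the annotated span lasts
    audio_file: str       # the recorded speech clip
    tags: list            # e.g. produced by a speech recognizer

note = VoiceAnnotation("match_final.mp4", 75.0, 8.5,
                       "note_0001.wav", ["goal", "replay"])
print(f"{note.video_id}: {note.start_sec}s +{note.duration_sec}s -> {note.tags}")
```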

A Semantics-based Video Retrieval System using Annotation and Feature (주석 및 특징을 이용한 의미기반 비디오 검색 시스템)

  • Lee, Jong-Hee (이종희)
    • Journal of the Institute of Electronics Engineers of Korea CI, v.41 no.4, pp.95-102, 2004
  • To process video data effectively, the content information of the video data must be loaded into a database, and a semantic-based retrieval method must be available for the various queries of users. Existing content-based video retrieval systems search by a single method, such as annotation-based or feature-based retrieval, show low search efficiency, and require much effort from the system administrator or annotator because automatic processing is imperfect. In this paper, we propose a semantics-based video retrieval system that supports semantic retrieval of massive video data for various users through both feature-based and annotation-based retrieval. From the user's initial query and the selection of an image among the key frames extracted for that query, an agent derives the detailed shape for the annotation of the extracted key frame. The key frame selected by the user also becomes a query image, and the system searches for the most similar key frames through the proposed feature-based retrieval method with optimized comparison-area extraction. The proposed system thus improves the retrieval efficiency of video data through semantics-based retrieval.
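
A minimal sketch of the combined retrieval idea described above: candidate key frames are first filtered by annotation keywords, then ranked by visual similarity to the user-selected key frame. The histogram similarity measure and data layout are assumptions, not the paper's optimized comparison-area method.

```python
import cv2

def frame_histogram(image_path):
    """Compute a normalized BGR color histogram as a simple visual feature."""
    img = cv2.imread(image_path)
    hist = cv2.calcHist([img], [0, 1, 2], None, [8, 8, 8],
                        [0, 256, 0, 256, 0, 256])
    return cv2.normalize(hist, hist).flatten()

def retrieve(query_image, query_terms, keyframes):
    """keyframes: list of dicts with 'path' and 'keywords' entries.
    Filter by annotation keywords, then rank by histogram correlation."""
    q_hist = frame_histogram(query_image)
    candidates = [k for k in keyframes if set(query_terms) & k["keywords"]]
    ranked = sorted(
        candidates,
        key=lambda k: cv2.compareHist(
            q_hist, frame_histogram(k["path"]), cv2.HISTCMP_CORREL),
        reverse=True)
    return [k["path"] for k in ranked]
```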