• Title/Summary/Keyword: 멀티 비전시스템 (multi-vision system)

QoS Management System of BcN for Convergence Services of Broadcasting and Communication (방송통신 컨버전스 서비스를 위한 BcN의 QoS 관리시스템)

  • Song, Myung-Won; Choi, In-Young; Jung, Soon-Key
    • Journal of the Korea Society of Computer and Information, v.14 no.3, pp.121-131, 2009
  • BcN provides a wide variety of high-quality multimedia services, such as broadcasting and communication convergence services. However, quality degradation is observed in BcN when a broadcasting and communication convergence service traverses the networks of more than one internet service provider. In this paper, a QoS management system that can objectively measure and manage quality-related information across the participating networks is proposed. The proposed QoS management system is tested on the pilot networks of the BcN consortiums by measuring the voice and video quality experienced by actual users of commercial video phone services. The experimental results show that the service quality between a user and a service provider can be assessed by analyzing the information collected from agents. The per-service traffic information collected by probes proves useful for pinpointing the party responsible for a loss of service quality when a service spans multiple providers. The experiment also shows that the proposed QoS management system can play a key role in resolving quality disputes, one of the important issues in a QoS-guaranteed BcN.
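The abstract above describes measurement agents and per-service probes that report quality data to a central management system. As a rough illustration only, here is a minimal Python sketch of how such an agent might report one voice/video quality sample; the class, field names, and reporting endpoint are hypothetical assumptions, not details from the paper.

```python
# Minimal sketch of a per-service QoS measurement agent. The reporting
# protocol (JSON over HTTP), the QosSample fields, and the server URL
# are all illustrative assumptions, not the paper's actual design.
import json
import time
from dataclasses import asdict, dataclass
from urllib import request


@dataclass
class QosSample:
    """One quality measurement for a single service flow."""
    service_id: str    # e.g. a video-phone session identifier
    provider: str      # ISP that carried this segment of the path
    packet_loss: float # fraction of packets lost (0.0-1.0)
    jitter_ms: float   # inter-arrival jitter in milliseconds
    mos: float         # estimated Mean Opinion Score (1.0-5.0)
    timestamp: float   # Unix time of the measurement


def report(sample: QosSample, server_url: str) -> None:
    """POST one sample to the QoS management server as JSON."""
    body = json.dumps(asdict(sample)).encode("utf-8")
    req = request.Request(server_url, data=body,
                          headers={"Content-Type": "application/json"})
    request.urlopen(req)


if __name__ == "__main__":
    sample = QosSample(service_id="vphone-001", provider="ISP-A",
                       packet_loss=0.02, jitter_ms=11.5, mos=3.8,
                       timestamp=time.time())
    # Hypothetical management-server endpoint.
    report(sample, "http://qos-manager.example/api/samples")
```

Aggregating such samples per service and per provider is what would let an operator attribute a quality drop to a specific segment of a multi-provider path, as the abstract suggests.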

A Collaborative Video Annotation and Browsing System using Linked Data (링크드 데이터를 이용한 협업적 비디오 어노테이션 및 브라우징 시스템)

  • Lee, Yeon-Ho; Oh, Kyeong-Jin; Sean, Vi-Sal; Jo, Geun-Sik
    • Journal of Intelligence and Information Systems, v.17 no.3, pp.203-219, 2011
  • Previously, users simply wanted to watch video content without any specific requirement or purpose. Today, however, while watching a video, users try to discover more about the things that appear in it. The demand for finding multimedia and browsing information about objects of interest is therefore growing with the increasing use of video, which is available not only on internet-capable devices such as computers but also on smart TVs and smartphones. Meeting this demand requires labor-intensive annotation of the objects in video content, and many researchers have actively studied methods of annotating objects that appear in video. In keyword-based annotation, related information about an object appearing in the video is attached directly, and the annotation data containing all of that information must be managed individually; users have to enter every piece of related information themselves. Consequently, when a user browses for information related to an object, only the limited resources that exist in the annotation data can be found, and annotating objects in this way imposes a heavy workload on users. To reduce this workload, existing object-based annotation approaches attempt automatic annotation using computer vision techniques such as object detection, recognition, and tracking. These techniques, however, must detect and recognize the wide variety of objects that can appear in video content, and fully automated annotation remains an open problem. To overcome these difficulties, we propose a system consisting of two modules. The first is an annotation module that lets many annotators collaboratively annotate the objects in video content, accessing semantic data through Linked Data. Annotation data managed by the annotation server is represented using an ontology so that the information can easily be shared and extended. Since the annotation data does not itself include all the relevant information about an object, objects appearing in the video are simply linked to existing resources in Linked Data, from which all related information can be obtained. In other words, the annotation server stores only a URI together with metadata such as position, time, and size; when a user needs other related information about an object, it is retrieved from Linked Data through the relevant URI. The second module lets viewers browse interesting information about an object, while watching the video, using the annotation data collaboratively generated by many users. With this system, a query is generated automatically through simple user interaction, the related information is retrieved from Linked Data, and the additional information about the object is presented to the user. In the future Semantic Web environment, the proposed system is expected to establish a better video content service environment by offering users relevant information about the objects that appear on the screen of any internet-capable device such as a PC, smart TV, or smartphone.
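The abstract describes annotations that store only a URI plus position, time, and size metadata, with all further information resolved from Linked Data at browsing time. As a rough illustration, here is a minimal Python sketch of that idea, using DBpedia as the Linked Data source; the Annotation class, its field names, and the query shape are illustrative assumptions, not the paper's actual data model or API.

```python
# Minimal sketch of URI-based video annotation: the annotation itself
# holds only a Linked Data URI and position/time/size metadata, and the
# object's description is fetched on demand via SPARQL (here from DBpedia).
import json
from dataclasses import dataclass
from urllib import parse, request


@dataclass
class Annotation:
    resource_uri: str  # Linked Data URI identifying the object
    start_sec: float   # when the object appears in the video
    end_sec: float     # when it disappears
    x: int             # bounding-box position in the frame
    y: int
    width: int         # bounding-box size in pixels
    height: int


def fetch_abstract(annotation: Annotation) -> str:
    """Retrieve the object's English description from DBpedia via its URI."""
    query = f"""
        PREFIX dbo: <http://dbpedia.org/ontology/>
        SELECT ?abstract WHERE {{
            <{annotation.resource_uri}> dbo:abstract ?abstract .
            FILTER (lang(?abstract) = "en")
        }} LIMIT 1
    """
    params = parse.urlencode({
        "query": query,
        "format": "application/sparql-results+json",
    })
    with request.urlopen("https://dbpedia.org/sparql?" + params) as resp:
        bindings = json.load(resp)["results"]["bindings"]
    return bindings[0]["abstract"]["value"] if bindings else ""


if __name__ == "__main__":
    ann = Annotation("http://dbpedia.org/resource/Eiffel_Tower",
                     start_sec=12.0, end_sec=18.5,
                     x=40, y=60, width=120, height=200)
    print(fetch_abstract(ann)[:200])
```

Because the annotation carries only a URI, any information already published about the resource in Linked Data becomes browsable without the annotator entering it by hand, which is the workload reduction the abstract argues for.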