• Title/Summary/Keyword: Information Video


The Implementation of Information Providing Method System for Indoor Area by using the Immersive Media's Video Information (실감미디어 동영상정보를 이용한 실내 공간 정보 제공 시스템 구현)

  • Lee, Sangyoon;Ahn, Heuihak
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.12 no.3
    • /
    • pp.157-166
    • /
    • 2016
  • This paper presents interior space information using 6D-360 degree immersive media video information. We implement augmented reality that includes various information, such as the position and movement information of specific locations in interior spaces where GPS signals do not reach. Augmented reality containing the 6D-360 degree immersive media video information provides position information and three-dimensional spatial image information to identify the exact location of a user, in a moving object as well as in a fixed interior space. This paper constructs a three-dimensional image database based on the 6D-360 degree immersive media video information and provides an augmented reality service. By mapping various information onto the 6D-360 degree immersive media video, users can inspect a plant in the same environment as the actual one, and the system can suggest augmented reality services for emergency escape and repair to passengers and employees.

Enhanced Augmented Reality with Realistic Shadows of Graphic Generated Objects (비디오 영상에 가상물체의 그림자 삽입을 통한 향상된 AR 구현)

  • Kim, Taewon;Hong, Kisang
    • Proceedings of the IEEK Conference
    • /
    • 2000.09a
    • /
    • pp.619-622
    • /
    • 2000
  • In this paper, we propose a method for generating graphic objects with realistic shadows inserted into video sequences for enhanced augmented reality. Our purpose is to extend the work of [1], which is applicable only to the case of a static camera, to video sequences. In the video case, however, there are a few challenging problems, including camera calibration over the video sequence and false shadows that occur when the camera moves. We solve these problems using the convenient calibration technique of [2] and the information available from the video sequence, and we present experimental results on real video sequences.

  • PDF

A New Objective Video Quality Metric for Stereoscopic Video

  • Zheng, Yan;Seo, Jungdong;Sohn, Kwanghoon
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2012.04a
    • /
    • pp.355-358
    • /
    • 2012
  • Although quality metrics for 2D video quality assessment have been proposed, quality models for stereoscopic video have not been widely studied. In this paper, a new objective video quality metric for stereoscopic video is proposed. The proposed algorithm considers three factors to evaluate stereoscopic video quality: blocking artifacts, blurring artifacts, and the difference between the left and right views of the stereoscopic video. The results show that the proposed algorithm has a higher correlation with DMOS than the others.
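The abstract above names three degradation factors but not how they are pooled into a single score. A minimal sketch of such a pooling step is below; the weights and the linear combination are purely illustrative assumptions, not the paper's actual model:

```python
def stereo_quality(blocking, blurring, lr_difference, weights=(0.4, 0.3, 0.3)):
    """Combine three normalized degradation measures into one quality score.

    Each input lies in [0, 1] (0 = no artifact). The score is 1 minus a
    weighted sum of the degradations, clamped at 0. The weights here are
    hypothetical; a real metric would fit them against subjective (DMOS) data.
    """
    w_block, w_blur, w_diff = weights
    degradation = w_block * blocking + w_blur * blurring + w_diff * lr_difference
    return max(0.0, 1.0 - degradation)
```

A pristine pair scores 1.0, and heavily degraded content approaches 0, which is the direction of correlation one would check against DMOS.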

Design and Implementation of MPEG-2 Compressed Video Information Management System (MPEG-2 압축 동영상 정보 관리 시스템의 설계 및 구현)

  • Heo, Jin-Yong;Kim, In-Hong;Bae, Jong-Min;Kang, Hyun-Syug
    • The Transactions of the Korea Information Processing Society
    • /
    • v.5 no.6
    • /
    • pp.1431-1440
    • /
    • 1998
  • Video data are stored and retrieved in various compressed forms according to their characteristics. In this paper, we present a generic data model that captures the structure of a video document and provides a means for indexing a video stream. Using this model, we design and implement CVIMS (the MPEG-2 Compressed Video Information Management System) to store and retrieve video documents. CVIMS extracts I-frames from MPEG-2 files, selects key-frames from the I-frames, and stores in a database index information such as thumbnails, captions, and picture descriptors of the key-frames. CVIMS then retrieves MPEG-2 video data using the thumbnails of key-frames and the various labels of queries.

  • PDF
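The key-frame selection step described above (picking representative frames from among the I-frames) can be sketched with a simple histogram-difference rule. This is an illustrative assumption; the abstract does not specify CVIMS's actual selection criterion:

```python
def select_key_frames(i_frame_histograms, threshold=0.3):
    """Pick key-frame indices from a sequence of I-frame color histograms.

    An I-frame becomes a key-frame when its normalized histogram differs
    from the last selected key-frame by more than `threshold` (L1 distance).
    The first I-frame is always selected. The threshold is a hypothetical
    tuning parameter, not a value from the paper.
    """
    if not i_frame_histograms:
        return []
    key_indices = [0]
    last = i_frame_histograms[0]
    for i, hist in enumerate(i_frame_histograms[1:], start=1):
        distance = sum(abs(a - b) for a, b in zip(last, hist))
        if distance > threshold:
            key_indices.append(i)
            last = hist
    return key_indices
```

The selected indices would then be used to cut thumbnails and compute the picture descriptors stored in the index database.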

Cross-layer Video Streaming Mechanism over Cognitive Radio Ad hoc Information Centric Networks

  • Han, Longzhe;Nguyen, Dinh Han;Kang, Seung-Seok;In, Hoh Peter
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.8 no.11
    • /
    • pp.3775-3788
    • /
    • 2014
  • With the increasing number of wireless and mobile networks, the way people use the Internet has changed substantially. Wireless multimedia services, such as wireless video streaming, mobile video games, and mobile voice over IP, will become the main applications of the future wireless Internet. To accommodate the growing volume of wireless data traffic and multimedia services, cognitive radio (CR) and Information-Centric Networking (ICN) have been proposed to maximize the utilization of the wireless spectrum and improve network performance. Although CR and ICN have high potential significance for the future wireless Internet, few studies have been conducted on their collaborative operation. Due to the lack of infrastructure support in multi-hop ad hoc CR networks, the problem is even more challenging for video streaming services. In this paper, we propose a Cross-layer Video Streaming Mechanism (CLISM) for Cognitive Radio Ad Hoc Information Centric Networks (CRAH-ICNs). The CLISM includes two distributed schemes designed for the forwarding nodes and receiving nodes in CRAH-ICNs. With the cross-layer approach, the CLISM can self-adapt to variations in link conditions without a central network controller. Experimental results demonstrate that the proposed CLISM efficiently adjusts the video transmission policy under various network conditions.

Deep Learning based Loss Recovery Mechanism for Video Streaming over Mobile Information-Centric Network

  • Han, Longzhe;Maksymyuk, Taras;Bao, Xuecai;Zhao, Jia;Liu, Yan
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.13 no.9
    • /
    • pp.4572-4586
    • /
    • 2019
  • Mobile Edge Computing (MEC) and Information-Centric Networking (ICN) are essential network architectures for the future Internet. Their advantages, such as computation and storage capabilities at the edge of the network, in-network caching, and the named-data communication paradigm, can greatly improve the quality of video streaming applications. However, packet loss in wireless network environments still affects video streaming performance, and existing loss recovery approaches in ICN do not exploit the capabilities of MEC. This paper proposes a Deep Learning based Loss Recovery Mechanism (DL-LRM) for video streaming over MEC-based ICN. Unlike existing approaches, the Forward Error Correction (FEC) packets are generated at the edge of the network, which dramatically reduces the workload of the core network and backhaul. By monitoring network state, the proposed DL-LRM controls the FEC request rate with a deep reinforcement learning algorithm. Considering the characteristics of video streaming and MEC, we also develop content caching detection and fast retransmission algorithms to effectively utilize MEC resources. Experimental results demonstrate that the DL-LRM adaptively adjusts and controls the FEC request rate and achieves better video quality than the existing approaches.
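The core control loop above adjusts an FEC request rate in response to observed loss. As a much simpler stand-in for the paper's deep reinforcement learning controller, one update step of a threshold-driven controller might look like this (all parameters are hypothetical):

```python
def adjust_fec_rate(current_rate, observed_loss, target_loss=0.01,
                    step=0.02, min_rate=0.0, max_rate=0.5):
    """One step of a loss-driven FEC request-rate controller.

    Raises the FEC redundancy rate when observed packet loss exceeds the
    target, lowers it otherwise, and clamps the result to
    [min_rate, max_rate]. This is a sketch of the control objective only;
    the paper learns this policy with deep reinforcement learning instead
    of a fixed rule.
    """
    if observed_loss > target_loss:
        current_rate += step
    else:
        current_rate -= step
    return min(max_rate, max(min_rate, current_rate))
```

A learned policy replaces the fixed `step` rule with an action chosen from the monitored network state, but the interface (state in, new FEC rate out) is the same.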

Video Captioning with Visual and Semantic Features

  • Lee, Sujin;Kim, Incheol
    • Journal of Information Processing Systems
    • /
    • v.14 no.6
    • /
    • pp.1318-1330
    • /
    • 2018
  • Video captioning refers to the process of extracting features from a video and generating video captions using the extracted features. This paper introduces a deep neural network model and its learning method for effective video captioning. In this study, semantic features that effectively express the video are used in addition to visual features. The visual features are extracted using convolutional neural networks such as C3D and ResNet, while the semantic features are extracted using a semantic feature extraction network proposed in this paper. Further, an attention-based caption generation network is proposed for effective generation of video captions from the extracted features. The performance and effectiveness of the proposed model are verified through various experiments on two large-scale video benchmarks, the Microsoft Video Description (MSVD) and the Microsoft Research Video-To-Text (MSR-VTT) datasets.

An Interactive Cooking Video Query Service System with Linked Data (링크드 데이터를 이용한 인터랙티브 요리 비디오 질의 서비스 시스템)

  • Park, Woo-Ri;Oh, Kyeong-Jin;Hong, Myung-Duk;Jo, Geun-Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.3
    • /
    • pp.59-76
    • /
    • 2014
  • The revolution of smart media such as smartphones, smart TVs, and tablets has made it easy for people to obtain content and related information anywhere, anytime. These characteristics of smart media have changed user behavior from passively watching content to actively engaging with it. Video is a kind of multimedia resource widely used to provide information effectively. People not only watch video content but also search for information related to specific objects that appear in it. However, because existing video content provides no information through the content itself, people must use extra views or devices to find that information. The interaction between user and media is therefore becoming a major concern, and the demand for direct interaction and instant information is increasing. The digital media environment is no longer expected to serve as a one-way information service that requires users to search the Internet manually for the information they need. To resolve this inconvenience, an interactive service is needed that provides information exchange between people and video content, or between people themselves. Recently, many researchers have recognized the importance of such interactive services, but only a few services provide interactive video, and with restricted functionality. This research targets the cooking domain for an interactive cooking video query service. Cooking continuously receives a great deal of attention, and with smart media devices users can easily watch cooking videos. Although cooking videos provide various information, such as cooking scenes and explanations for each recipe step, their one-way nature does not allow users to interactively obtain more information about the content. Cooking videos have indeed attracted academic research to study and solve several cooking-related problems.
However, few studies have focused on interactive services for cooking video, and they are still insufficient to provide interaction with users. In this paper, an interactive cooking video query service system with linked data is proposed to provide interaction functionality to users. A linked recipe schema is used to handle the linked data, and the linked data approach is applied to construct queries in a systematic manner when users interact with cooking videos. We add classes, data properties, and relations to the linked recipe schema because the current version of the schema is not sufficient to serve user interaction. A web crawler extracts recipe information from allrecipes.com, and all extracted recipe information is transformed into ontology instances by a developed instance generator. To provide a query function, hundreds of questions from cooking video websites such as BBC Food, Foodista, and Fine Cooking were investigated and analyzed. After this analysis, the questions were generalized into four categories, clustered from eleven question types. The proposed system provides an environment combining UI (User Interface) and UX (User Experience) that allows users to watch cooking videos while obtaining the necessary additional information through an extra information layer. Because responsive web design is applied, users can use the proposed interactive cooking video system in both PC and mobile environments. In addition, by employing linked data to provide information matching the current context, the proposed system enables interaction between user and video on various smart media devices. Two methods are used to evaluate the proposed system. First, through a questionnaire-based method, system usability is measured by comparing the proposed system with an existing website.
Second, the answer accuracy for user interaction is measured to inspect the information to be offered. The experimental results show that the proposed system receives a favorable evaluation and provides accurate answers for user interaction.

Semi-Dynamic Digital Video Adaptation System for Mobile Environment (모바일 환경을 위한 준-동적 디지털 비디오 어댑테이션 시스템)

  • Chu, Jinho;Lee, Sangmin;Nang, Jongho
    • Journal of KIISE:Software and Applications
    • /
    • v.31 no.10
    • /
    • pp.1320-1331
    • /
    • 2004
  • A video adaptation system translates a source video stream into an appropriate video stream that satisfies the network and client constraints while maximizing video quality as much as possible. This paper proposes a semi-dynamic video adaptation scheme in which several intermediate video streams and the information needed for measuring video quality are generated statically. The intermediate streams are generated by repeatedly reducing the resolution of the source by a power of two, and they are stored on the video server. The statically generated information for the input stream consists of the degree of smoothness for each frame rate and the degree of frame definition for each pixel bit rate; it helps generate the target stream dynamically, and as quickly as possible, according to the client's QoS at run time. Experimental results show that, although extra storage for the intermediate streams is required, the proposed scheme can generate the target stream about thirty times faster than a fully dynamic approach while keeping the quality degradation below 2%.
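The run-time half of the scheme above reduces to picking, among the pre-generated intermediate streams, the highest-quality one that fits the client's constraints. A minimal sketch of that selection (the divisor/bitrate table is a made-up example, not data from the paper):

```python
def choose_stream(precomputed, bandwidth_kbps):
    """Pick the best pre-generated intermediate stream for a client.

    `precomputed` maps a resolution divisor (1, 2, 4, ...; each step halves
    width and height, i.e. reduces resolution by a power of two) to an
    estimated bitrate in kbps. The smallest divisor (highest quality) whose
    bitrate fits the client's bandwidth wins; if none fits, fall back to
    the lowest-bitrate stream.
    """
    fitting = [(divisor, bitrate) for divisor, bitrate in precomputed.items()
               if bitrate <= bandwidth_kbps]
    if fitting:
        return min(fitting)[0]  # smallest divisor = highest resolution
    return min(precomputed.items(), key=lambda kv: kv[1])[0]
```

Because every candidate stream already exists on the server, this lookup replaces an expensive on-the-fly transcode, which is where the reported ~30x speedup comes from.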

Adaptive Video Streaming System Using Receiver Caching (수신단 캐싱을 활용한 적응형 비디오 스트리밍 시스템)

  • Kim, Yu-Sin;Jeong, Moo-Woong;Shin, Jae Min;Ryu, Jong Yeol;Ban, Tae-Won
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.23 no.7
    • /
    • pp.837-844
    • /
    • 2019
  • As the demand for video streaming has rapidly increased, video streaming schemes that improve the efficiency of radio resources have attracted much attention. In this paper, we propose an adaptive video streaming scheme that enhances streaming efficiency by exploiting receivers' caching capability. Whereas existing schemes can transmit video data on a broadcast basis only when two clients request the same video, the proposed scheme can broadcast even when the two clients request different videos, provided that specific conditions are satisfied. We mathematically derive the average transmission time of the proposed scheme and an approximation of it, and verify the accuracy of the analysis by simulation. Both the mathematical analysis and the simulation results show that the proposed scheme significantly reduces the average transmission time compared to the existing scheme.
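The "specific conditions" under which different requests can still share one broadcast are not spelled out in the abstract. One classic condition from coded caching, shown here purely as an illustrative assumption, is that each client already holds the other's requested video in its cache, so a single XOR-coded broadcast lets both decode:

```python
def can_broadcast(request_a, request_b, cache_a, cache_b):
    """Decide whether one broadcast transmission can serve both clients.

    Broadcasting trivially works when both clients request the same video.
    Otherwise, if each client caches the video the other wants, the server
    can broadcast the XOR of the two videos and each client cancels out
    the part it already holds. This condition is a sketch of the general
    idea, not the paper's exact criterion.
    """
    if request_a == request_b:
        return True
    return request_b in cache_a and request_a in cache_b
```

When the condition fails, the server falls back to separate unicast transmissions, which is what drives the difference in average transmission time.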