Title/Summary/Keyword: video data

A Video Geographic Information System for Supporting Bi-directional Search for Video Data and Geographic Information

  • Yoo, Jea-Jun;Joo, In-Hak;Park, Jong-Huyn;Lee, Jong-Hun
    • Proceedings of the KSRS Conference / 2002.10a / pp.151-156 / 2002
  • Recently, as the geographic information system (GIS), which searches and manages geographic information, comes into wider use, there are growing requests for systems that can search and display more actual and realistic information. In response to these requests, the video geographic information system, which connects video data obtained by cameras with geographic information and displays the obtained video data as it is, is becoming more popular. However, because most existing video geographic information systems treat video data as an attribute of geographic information, or use simple one-way links from geographic information to video data, they only support displaying video data reached by searching geographic information. In this paper, we design and implement a video geographic information system that connects video data with geographic information and supports bi-directional search: searching geographic information through video data and searching video data through geographic information. To do this, we 1) propose an ER data model to represent the connection information between video data and geographic information, 2) propose a process to extract and construct the connection information from video data and geographic information, and 3) show a component-based system architecture to organize the video geographic information system.
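As a rough illustration of such bi-directional connection information, the sketch below models the link as a many-to-many table that can be queried in either direction. All table and column names are assumptions for illustration, not the paper's actual ER schema.

```python
# Minimal sketch: a link table keyed both ways lets one query video intervals
# from a geographic object and geographic objects from a video frame.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE geo_object (geo_id INTEGER PRIMARY KEY, name TEXT, x REAL, y REAL);
CREATE TABLE video_clip (video_id INTEGER PRIMARY KEY, file TEXT, start_frame INT, end_frame INT);
-- connection information: many-to-many, so search works in both directions
CREATE TABLE geo_video_link (
    geo_id     INTEGER REFERENCES geo_object(geo_id),
    video_id   INTEGER REFERENCES video_clip(video_id),
    frame_from INT, frame_to INT,
    PRIMARY KEY (geo_id, video_id, frame_from)
);
CREATE INDEX idx_link_video ON geo_video_link(video_id);  -- video -> geo direction
""")
conn.execute("INSERT INTO geo_object VALUES (42, 'City Hall', 127.38, 36.35)")
conn.execute("INSERT INTO video_clip VALUES (7, 'drive01.mpg', 0, 5400)")
conn.execute("INSERT INTO geo_video_link VALUES (42, 7, 1100, 1350)")

# geographic information -> video data
clips_for_place = conn.execute(
    "SELECT v.file, l.frame_from, l.frame_to FROM geo_video_link l "
    "JOIN video_clip v USING (video_id) WHERE l.geo_id = ?", (42,)).fetchall()

# video data -> geographic information (objects visible at a given frame)
places_at_frame = conn.execute(
    "SELECT g.name FROM geo_video_link l JOIN geo_object g USING (geo_id) "
    "WHERE l.video_id = ? AND ? BETWEEN l.frame_from AND l.frame_to",
    (7, 1200)).fetchall()
```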

Design and Implementation of the Video Query Processing Engine for Content-Based Query Processing (내용기반 질의 처리를 위한 동영상 질의 처리기의 설계 및 구현)

  • Jo, Eun-Hui;Kim, Yong-Geol;Lee, Hun-Sun;Jeong, Yeong-Eun;Jin, Seong-Il
    • The Transactions of the Korea Information Processing Society / v.6 no.3 / pp.603-614 / 1999
  • As multimedia application services on high-speed information networks have been rapidly developed, the need for a video information management system that provides an efficient way for users to retrieve video data is growing. In this paper, we propose a video data model that integrates free annotations, image features, and spatial-temporal features for the purpose of improving content-based retrieval of video data. The proposed video data model can act as a generic video data model for multimedia applications and supports free annotations, image features, spatial-temporal features, and the structure information of video data within the same framework. We also propose a video query language for efficiently specifying queries that access video clips in the video data; it can formalize various kinds of queries based on the video contents. Finally, we design and implement a query processing engine for efficient video data retrieval based on the proposed metadata model and the proposed video query language.
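The toy sketch below shows the kind of content-based predicate such a query language might formalize, combining annotation, image-feature, and temporal conditions. The class and function names are illustrative assumptions, not the paper's actual query language or engine.

```python
# Hedged sketch of a content-based video query over shot metadata.
from dataclasses import dataclass

@dataclass
class Shot:
    video: str
    start: float          # seconds
    end: float
    annotations: set      # free-text keywords
    dominant_color: str   # simple image feature

def query(shots, keyword=None, color=None, min_len=0.0):
    """Yield shots matching annotation, image-feature, and temporal predicates."""
    for s in shots:
        if keyword and keyword not in s.annotations:
            continue
        if color and s.dominant_color != color:
            continue
        if (s.end - s.start) < min_len:
            continue
        yield s

shots = [Shot("news.mpg", 0.0, 12.5, {"anchor", "studio"}, "blue"),
         Shot("news.mpg", 12.5, 40.0, {"flood", "river"}, "brown")]
hits = list(query(shots, keyword="flood", min_len=5.0))   # -> the second shot
```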

Design and Implementation of Video GIS for Web Applications

  • Yoo, Jae-Jun;Choi, Kyoung-Ho;Jang, Byoung-Tae;Lee, Jong-Hun
    • Proceedings of the KSRS Conference / 2003.11a / pp.396-398 / 2003
  • Recently, as users' requests for geographic information systems become more various, several new functions are being added to existing geographic information systems. One of them is the linkage of spatial geographic data and video/image data to offer users more realistic information about geographic objects; geographic information systems implementing this function are called video geographic information systems. In this paper, we design and implement a video geographic information system that provides map data, video data, and their link information to web applications. We 1) design the system architecture of a video geographic information system, 2) analyze some processes to construct link information between map data and video data, 3) design a database schema to store map data, video data, and link information, and 4) design some XML/GML schemas used to query and retrieve these data for web applications.
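To give a feel for the kind of XML/GML exchange such a web-facing video GIS might use, here is a toy response linking a map feature to a video interval, parsed with Python's standard library. The element and attribute names are invented for illustration; the paper's actual XML/GML schemas are not reproduced here.

```python
# Parse a toy, loosely GML-flavored response that links a road feature to video.
import xml.etree.ElementTree as ET

response = """
<FeatureCollection xmlns:gml="http://www.opengis.net/gml">
  <Road id="road-17">
    <gml:name>Daedeok Blvd</gml:name>
    <gml:pos>127.37 36.38</gml:pos>
    <videoLink file="survey_2003.mpg" startFrame="300" endFrame="960"/>
  </Road>
</FeatureCollection>
"""

root = ET.fromstring(response)
ns = {"gml": "http://www.opengis.net/gml"}
for road in root.findall("Road"):
    name = road.find("gml:name", ns).text
    link = road.find("videoLink").attrib
    print(name, "->", link["file"], link["startFrame"], link["endFrame"])
```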

Dynamic Video Object Data Model (DVID) (동적 비디오 객체 데이터 모델(DVID))

  • Song, Yong-Jun;Kim, Hyeong-Ju
    • Journal of KIISE: Software and Applications / v.26 no.9 / pp.1052-1060 / 1999
  • A lot of research has been done on modeling video databases, but all of the resulting models can be considered static video data models, in the sense that video data on those models are always presented in a predefined sequence when there is no user interaction. Video database applications that provide up-to-date video information services, such as news-on-demand, video-on-demand, digital libraries, and internet shopping, request video editing frequently, preferably in real time. To support this, the contents of existing video data should be changed or new video data should be created, but on traditional video data models such video editing work has to be done manually. In order to reduce the effort of video editing, this paper proposes a dynamic video object data model named DVID, based on the object-oriented data model. DVID provides not only the conventional static video object but also the dynamic video object, whose contents are dynamically determined from the video database in real time, even without user interaction.
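The static/dynamic distinction the abstract describes can be sketched as follows: a static object stores a fixed shot sequence, while a dynamic object stores a query that is re-evaluated against the database at presentation time. The class names and structure are assumptions for illustration, not DVID's actual definitions.

```python
# Sketch: static objects replay a predefined sequence; dynamic objects decide
# their contents from the video database at playback time.
class StaticVideoObject:
    def __init__(self, shots):
        self._shots = list(shots)            # predefined presentation order
    def presentation(self, db):
        return self._shots

class DynamicVideoObject:
    def __init__(self, predicate):
        self._predicate = predicate          # contents decided at play time
    def presentation(self, db):
        return [s for s in db if self._predicate(s)]

db = [{"id": 1, "topic": "sports", "date": "1999-06-01"},
      {"id": 2, "topic": "news",   "date": "1999-06-02"}]
latest_news = DynamicVideoObject(lambda s: s["topic"] == "news")
print(latest_news.presentation(db))          # re-evaluated on every playback
```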

An Analysis of Big Video Data with Cloud Computing in Ubiquitous City (클라우드 컴퓨팅을 이용한 유시티 비디오 빅데이터 분석)

  • Lee, Hak Geon;Yun, Chang Ho;Park, Jong Won;Lee, Yong Woo
    • Journal of Internet Computing and Services / v.15 no.3 / pp.45-52 / 2014
  • The Ubiquitous-City (U-City) is a smart or intelligent city that satisfies human beings' desire to enjoy IT services with any device, anytime, anywhere. It is a future city model based on the Internet of Everything or Things (IoE or IoT). It includes a lot of video cameras which are networked together. The networked video cameras support many U-City services as one of the main input data sources together with sensors, and they constantly generate a huge amount of video information, real big data for the U-City. The U-City is usually required to manipulate this big data in real time, which is not easy at all. Also, the accumulated video data often have to be analyzed to detect an event or find a figure among them, which requires a lot of computational power and usually takes a lot of time. Currently we can find research that tries to reduce the processing time of the big video data. Cloud computing can be a good solution to address this matter, and there are many cloud computing methodologies which can be applied to it; MapReduce is an interesting and attractive methodology with many advantages that is gaining popularity in many areas. Video cameras evolve day by day, so their resolution improves sharply, which leads to exponential growth of the data produced by the networked video cameras; we are coping with real big data when we have to deal with video image data produced by good-quality video cameras. A video surveillance system was not very useful before cloud computing, but it is now spreading widely in U-Cities since some useful methodologies have been found. Video data are unstructured, thus it is not easy to find good research results on analyzing such data with MapReduce. This paper presents an analyzing system for the video surveillance system, which is a cloud-computing based video data management system. It is easy to deploy, flexible, and reliable. It consists of the video manager, the video monitors, the storage for the video images, the storage client, and the streaming IN component. The "video monitor" for the video images consists of the "video translator" and the "protocol manager", and the "storage" contains the MapReduce analyzer. All components were designed according to the functional requirements of a video surveillance system. The "streaming IN" component receives the video data from the networked video cameras and delivers them to the "storage client"; it also manages the bottleneck of the network to smooth the data stream. The "storage client" receives the video data from the "streaming IN" component and stores them in the storage; it also helps other components access the storage. The "video monitor" component transfers the video data by smooth streaming and manages the protocol. The "video translator" sub-component enables users to manage the resolution, the codec, and the frame rate of the video image. The "protocol" sub-component manages the Real Time Streaming Protocol (RTSP) and the Real Time Messaging Protocol (RTMP). We use the Hadoop Distributed File System (HDFS) as the storage of the cloud computing system; Hadoop stores the data in HDFS and provides a platform that can process the data with the simple MapReduce programming model. We suggest our own methodology to analyze the video images using MapReduce in this paper: the workflow of the video analysis is presented and explained in detail. We experimented for the performance evaluation and found that our proposed system worked well; the results are presented in this paper with analysis. With our cluster system, we used compressed 1920×1080 (FHD) resolution video data, the H.264 codec, and HDFS as the video storage. We measured the processing time according to the number of frames per mapper. Tracing the optimal splitting size of the input data and the processing time according to the number of nodes, we found that the system performance scales linearly.
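The MapReduce workflow described above can be sketched in miniature as follows. This is pure Python standing in for Hadoop; the split size, the detector, and the emitted key-value scheme are assumptions, not the paper's actual implementation.

```python
# MapReduce-style frame analysis: mappers emit (label, 1) per detection over a
# chunk of frames; the reducer sums counts per label.
from collections import defaultdict
from functools import reduce

def detect(frame):                           # dummy detector for the sketch
    return ["person"] if frame % 3 == 0 else []

def mapper(frame_chunk):
    """Emit (label, count) pairs for objects detected in a chunk of frames."""
    return [(label, 1) for frame in frame_chunk for label in detect(frame)]

def reducer(acc, pair):
    label, n = pair
    acc[label] += n
    return acc

frames = list(range(300))                    # stand-in for decoded FHD frames
splits = [frames[i:i + 30] for i in range(0, len(frames), 30)]  # frames per mapper
mapped = [pair for chunk in splits for pair in mapper(chunk)]
totals = reduce(reducer, mapped, defaultdict(int))
print(dict(totals))                          # {'person': 100}
```

Varying the chunk size in `splits` mirrors the paper's experiment on processing time versus the number of frames per mapper.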

Layered Video Content Modeling and Browsing (계층화된 비디오 내용 모델링 및 브라우징)

  • Bok, Kyoung-Soo;Lee, Nak-Gyu;Heo, Jeong-Pil;Yoo, Jae-Soo;Cho, Ki-Hyung;Lee, Byoung-Yup
    • The KIPS Transactions: Part D / v.10D no.7 / pp.1115-1126 / 2003
  • In this paper, we propose a modeling method for video data that efficiently represents the structural and semantic contents of video data. A browsing method that helps users easily understand and play the contents of video data is also presented. The proposed modeling scheme consists of three layers: a raw data layer, a content layer, and a key frame layer. The content layer represents the logical hierarchy and semantic contents of video data. We implement two kinds of browsers for playing video data and providing video contents: the playing browser plays video data and presents information on the currently playing shot, while the content browser allows users to browse the raw data, structural information, and semantic contents of video data.
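A minimal sketch of the three-layer idea, with attribute names assumed for illustration: physical shots at the bottom, logical scenes with semantics in the middle, and key frames on top.

```python
# Raw data layer (Shot), content layer (Scene), key frame layer (timestamps).
from dataclasses import dataclass, field

@dataclass
class Shot:                      # raw data layer: physical video segment
    file: str
    start: float
    end: float

@dataclass
class Scene:                     # content layer: logical hierarchy + semantics
    title: str
    description: str
    shots: list = field(default_factory=list)
    key_frames: list = field(default_factory=list)   # key frame layer

scene = Scene("chase", "car chase downtown",
              shots=[Shot("movie.mpg", 310.0, 395.5)],
              key_frames=[312.0, 350.2, 391.0])
```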

AnoVid: A Deep Neural Network-based Tool for Video Annotation (AnoVid: 비디오 주석을 위한 심층 신경망 기반의 도구)

  • Hwang, Jisu;Kim, Incheol
    • Journal of Korea Multimedia Society / v.23 no.8 / pp.986-1005 / 2020
  • In this paper, we propose AnoVid, an automated video annotation tool based on deep neural networks that automatically generates various metadata for each scene or shot in a long drama video containing rich elements. To this end, a novel metadata schema for drama video is designed. Based on this schema, the AnoVid video annotation tool has a total of six deep neural network models for object detection, place recognition, time zone recognition, person recognition, activity detection, and description generation. Using these models, AnoVid can generate rich video annotation data. In addition, AnoVid not only automatically generates a JSON-type video annotation data file, but also provides various visualization facilities to check the video content analysis results. Through experiments using a real drama video, "Misaeng", we show the practical effectiveness and performance of the proposed video annotation tool, AnoVid.
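A hedged example of what a JSON-type annotation file covering the six model outputs might look like; the field names and values below are illustrative assumptions, not AnoVid's published schema.

```python
# Write a toy per-shot annotation record covering objects, place, time zone,
# persons, activities, and a generated description.
import json

annotation = {
    "video": "misaeng_ep01.mp4",
    "shots": [{
        "shot_id": 17,
        "time": [412.3, 418.9],
        "objects": ["person", "desk", "monitor"],
        "place": "office",
        "time_zone": "day",
        "persons": ["Jang Geu-rae"],
        "activities": ["typing"],
        "description": "A man works at a desk in an office."
    }]
}
with open("annotation.json", "w", encoding="utf-8") as f:
    json.dump(annotation, f, ensure_ascii=False, indent=2)
```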

Design and Implementation of A Video Information Management System for Digital Libraries (디지털 도서관을 위한 동영상 정보 관리 시스템의 설계 및 구현)

  • 김현주;권재길;정재희;김인홍;강현석;배종민
    • Journal of Korea Multimedia Society / v.1 no.2 / pp.131-141 / 1998
  • Video data occurring in multimedia documents consist of a large amount of irregular data, including audio-visual, spatial-temporal, and semantic information. In general, it is difficult to grasp the exact meaning of such video information because video data apparently consist of meaningless symbols and numbers. To relieve these difficulties, it is necessary to develop an integrated manager for the complex structures of video data and to provide users of video digital libraries with easy, systematic access mechanisms to video information. This paper proposes a generic integrated video information model (GIVIM), based on an extended Dublin Core metadata system, to effectively store and retrieve video documents in digital libraries. The GIVIM is an integrated model of a video metadata model (VMM) and a video architecture information model (VAIM). We also present the design and implementation results of a video document management system (VDMS) based on the GIVIM.
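As a loose illustration of combining descriptive metadata with architecture information in one record, the sketch below pairs core Dublin Core elements with assumed video-specific extensions; the GIVIM defines its own elements, which are not reproduced here.

```python
# Core Dublin Core fields plus assumed structural extensions for a video document.
dc_record = {
    # standard Dublin Core elements
    "title": "City Council Session, March 1998",
    "creator": "City Media Office",
    "date": "1998-03-12",
    "format": "video/mpeg",
    # assumed extensions carrying video architecture information
    "structure": {"scenes": 14, "shots": 122},
    "duration_sec": 3600,
    "key_frames": ["kf_0001.jpg", "kf_0002.jpg"],
}
```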

Review for vision-based structural damage evaluation in disasters focusing on nonlinearity

  • Sifan Wang;Mayuko Nishio
    • Smart Structures and Systems / v.33 no.4 / pp.263-279 / 2024
  • With the increasing diversity of internet media, available video data have become more convenient and abundant. Related video data-based research has advanced rapidly in recent years owing to advantages such as noncontact, low-cost data acquisition, high spatial resolution, and simultaneity. Additionally, structural nonlinearity extraction has attracted increasing attention as a tool for damage evaluation. This review paper aims to summarize the recent developments and applications of video data-based technology for structural nonlinearity extraction and damage evaluation. The most commonly used object detection image and video databases are first summarized, followed by suggestions for obtaining video data on structural nonlinear damage events. Technologies for linear and nonlinear system identification based on video data are then discussed. In addition, common nonlinear damage types in disaster events and prevalent processing algorithms are reviewed in the section on structural damage evaluation using video data uploaded to online platforms. Finally, potential research directions are discussed to address the weaknesses of current video data-based nonlinear extraction technology, such as its reliance on one-dimensional time-series data and the difficulty of real-time detection, covering nonlinear extraction for spatial data, real-time detection, and visualization.

Caption Data Transmission Method for HDTV Picture Quality Improvement (DTV 화질향상을 위한 자막데이터 전송방법)

  • Han, Chan-Ho
    • Journal of Korea Multimedia Society / v.20 no.10 / pp.1628-1636 / 2017
  • Increased data for service convenience, such as closed captions, ancillary data, the electronic program guide (EPG), and data broadcasting, degrade the video quality of high-definition contents. This article proposes a method to transfer the closed caption data of video contents without video quality degradation: inserting the caption data as a block image into the essential hidden area of the DTV frame causes no video quality degradation in video compression. Additionally, the proposed method has the advantage of synchronizing video, audio, and captions from the pre-inserted script without time delay.
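One plausible reading of the "essential hidden area" is the coded-but-cropped region of an FHD frame: H.264 codes 1920×1088 (a multiple of 16) and crops to 1080 for display, leaving eight rows invisible. The sketch below illustrates only that assumed interpretation, not the paper's actual method.

```python
# Place a caption bitmap in the coded-but-not-displayed rows of a luma plane.
import numpy as np

coded = np.zeros((1088, 1920), dtype=np.uint8)        # one coded FHD luma plane
coded[:1080, :] = 128                                  # visible picture content
caption_bits = np.unpackbits(np.frombuffer(b"NEWS", dtype=np.uint8))
coded[1080, :caption_bits.size] = caption_bits * 255   # caption data, hidden row

visible = coded[:1080, :]                              # what the decoder displays
```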