• Title/Summary/Keyword: Video processing


A Study on COP-Transformation Based Metadata Security Scheme for Privacy Protection in Intelligent Video Surveillance (지능형 영상 감시 환경에서의 개인정보보호를 위한 COP-변환 기반 메타데이터 보안 기법 연구)

  • Lee, Donghyeok; Park, Namje
    • Journal of the Korea Institute of Information Security & Cryptology / v.28 no.2 / pp.417-428 / 2018
  • The intelligent video surveillance environment is a system that extracts various kinds of information about video objects and enables automated processing by analyzing the video data collected by CCTV. However, because privacy exposure can occur during intelligent video surveillance, security measures are necessary. Video metadata is especially vulnerable because it can include a wide range of personal information derived through big-data analysis. In this paper, we propose a COP-Transformation scheme to protect video metadata. The proposed scheme greatly enhances both the security and the efficiency of video metadata processing.
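The abstract does not spell out how the COP-Transformation itself works, so the following is only a generic sketch of field-level metadata protection in Python, with hypothetical field names (`person_id`, `plate_number`, `face_track`): identifying fields are replaced by keyed pseudonyms, so analytics can still link records for the same subject without storing the raw identifier.

```python
import hashlib
import hmac

# Hypothetical names for metadata fields that identify a person.
SENSITIVE_FIELDS = {"person_id", "plate_number", "face_track"}

def protect_metadata(record, key):
    """Replace sensitive metadata fields with keyed pseudonyms (HMAC-SHA256).

    The pseudonym is deterministic under one key, so records for the same
    subject can still be joined, but the raw identifier is never stored.
    """
    out = {}
    for field, value in record.items():
        if field in SENSITIVE_FIELDS:
            digest = hmac.new(key, str(value).encode(), hashlib.sha256)
            out[field] = digest.hexdigest()[:16]
        else:
            out[field] = value
    return out

record = {"camera": "cam-07", "timestamp": "2018-02-01T10:00:00", "person_id": "P-1234"}
protected = protect_metadata(record, b"secret-key")
```

Rotating the key invalidates old pseudonyms, which is one simple way to bound how long records stay linkable.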

A Spatial Data Construction System with Video GIS (비디오 GIS를 이용한 공간데이터 구축 시스템)

  • Joo, In-Hak; Yoo, Jae-Jun; Nam, Kwang-Woo; Kim, Min-Soo; Lee, Jong-Hun
    • Proceedings of the Korea Information Processing Society Conference / 2002.11c / pp.1903-1906 / 2002
  • Video GIS is a spatial information system in which video is integrated with maps and other media such as 3D graphics, images, and satellite imagery. By its nature, video provides realistic information. Connecting a map with images of actual geographic objects yields realistic visualization, overcoming the limitations of conventional map-based GIS. In the suggested video GIS, location information is embedded in the video data, enabling bidirectional searching, browsing, and analysis between video and map. In this paper, we suggest a video GIS that integrates and manages video and maps and that constructs spatial information. We also develop a prototype video GIS for roadside facility management and present the results.


An Analysis of Big Video Data with Cloud Computing in Ubiquitous City (클라우드 컴퓨팅을 이용한 유시티 비디오 빅데이터 분석)

  • Lee, Hak Geon; Yun, Chang Ho; Park, Jong Won; Lee, Yong Woo
    • Journal of Internet Computing and Services / v.15 no.3 / pp.45-52 / 2014
  • The Ubiquitous-City (U-City) is a smart, intelligent city that satisfies the desire to enjoy IT services with any device, anytime, anywhere. It is a future city model based on the Internet of Things (IoT). It includes many networked video cameras, which, together with sensors, provide the main input data for many U-City services. They continuously generate a huge amount of video information: real big data for the U-City. The U-City is usually required to manipulate this big data in real time, which is not easy. Often the accumulated video data must also be analyzed to detect an event or find a figure, which demands substantial computational power and usually takes a long time. Current research tries to reduce the processing time of big video data, and cloud computing is a good way to address this. Among the many applicable cloud-computing methodologies, MapReduce is an attractive one with many advantages that is gaining popularity in many areas. As video cameras evolve, their resolution improves sharply, leading to exponential growth of the data produced by the networked cameras; the video image data produced by high-quality cameras is genuinely big data. Video surveillance systems became practical at this scale only with cloud computing, and they are now being widely deployed in U-Cities. Because video data are unstructured, good research results on analyzing them with MapReduce are rare. This paper presents an analysis system for video surveillance: a cloud-computing-based video data management system that is easy to deploy, flexible, and reliable. It consists of the video manager, the video monitors, the storage for the video images, the storage client, and the streaming-IN component. The "video monitor" consists of a "video translator" and a "protocol manager"; the "storage" contains the MapReduce analyzer. All components were designed according to the functional requirements of a video surveillance system. The "streaming-IN" component receives the video data from the networked video cameras, delivers it to the "storage client", and manages network bottlenecks to smooth the data stream. The "storage client" receives the video data from the "streaming-IN" component, stores it, and helps other components access the storage. The "video monitor" component streams the video data smoothly and manages the protocols; its "video translator" sub-component lets users manage the resolution, codec, and frame rate of the video, while its "protocol manager" sub-component handles the Real Time Streaming Protocol (RTSP) and the Real Time Messaging Protocol (RTMP). We use the Hadoop Distributed File System (HDFS) as the cloud-computing storage: Hadoop stores the data in HDFS and provides a platform that can process it with the simple MapReduce programming model. We suggest our own methodology for analyzing video images with MapReduce: the workflow of the video analysis is presented and explained in detail. A performance evaluation was conducted experimentally and showed that the proposed system works well; the results are presented with analysis. On our cluster we used compressed 1920×1080 (FHD) resolution video data, the H.264 codec, and HDFS as the video storage, and measured the processing time as a function of the number of frames per mapper. Tracing the optimal split size of the input data and the processing time as a function of the number of nodes, we found that the system performance scales linearly.
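The frames-per-mapper workflow can be illustrated with a toy map/reduce pair in plain Python. The frame records and labels below are invented; a real job would run the mapper over HDFS splits of decoded H.264 frames and let Hadoop shuffle the emitted pairs to reducers.

```python
from collections import defaultdict

def mapper(frames):
    """One mapper handles a split of N frames and emits (event, 1) pairs.

    Here each "frame" is just a dict naming what a detector found in it;
    decoding real video frames from an HDFS split is elided.
    """
    for frame in frames:
        yield (frame["label"], 1)

def reducer(pairs):
    """Sum the counts per event key, as a MapReduce reducer would."""
    totals = defaultdict(int)
    for key, value in pairs:
        totals[key] += value
    return dict(totals)

split = [{"label": "person"}, {"label": "car"}, {"label": "person"}]
result = reducer(mapper(split))
```

Varying the number of frames handed to each mapper is exactly the knob the evaluation above measures: larger splits amortize per-task overhead, smaller splits expose more parallelism.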

Real-Time Panoramic Video Streaming Technique with Multiple Virtual Cameras (다중 가상 카메라의 실시간 파노라마 비디오 스트리밍 기법)

  • Ok, Sooyol; Lee, Suk-Hwan
    • Journal of Korea Multimedia Society / v.24 no.4 / pp.538-549 / 2021
  • In this paper, we introduce a technique for real-time 360-degree panoramic video streaming with multiple virtual cameras. The proposed technique consists of generating 360-degree panoramic video data by ORB feature-point detection, texture transformation, panoramic video data compression, and RTSP-based video streaming. In particular, the generation of the 360-degree panoramic video data and the texture transformation are accelerated with CUDA for complex processing such as camera calibration, stitching, blending, and encoding. Our experiment evaluated the frame rate (fps) of the transmitted 360-degree panoramic video. The results verified that our technique achieves at least 30 fps at 4K output resolution, indicating that it can both generate and transmit 360-degree panoramic video data in real time.
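The blending stage of such a pipeline can be sketched, without CUDA, ORB, or real calibration, as a linear feather blend across the overlap between two adjacent camera images. Everything below is a toy stand-in for the GPU pipeline the paper describes; images are lists of pixel rows.

```python
def stitch(left, right, overlap):
    """Stitch two aligned grayscale images by feather-blending `overlap` columns.

    In the overlap region the weight ramps linearly from the left image to
    the right one, which hides the seam; real stitchers align the images
    first via feature matching and warping.
    """
    width_l = len(left[0])
    out = []
    for row_l, row_r in zip(left, right):
        blended = []
        for i in range(overlap):
            alpha = (i + 1) / (overlap + 1)  # ramps toward the right image
            blended.append((1 - alpha) * row_l[width_l - overlap + i]
                           + alpha * row_r[i])
        out.append(row_l[:width_l - overlap] + blended + row_r[overlap:])
    return out
```

Feathering is the simplest blend; multi-band blending gives better results near high-frequency detail at more cost, which is one reason the paper offloads this step to the GPU.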

Real-Time Scheduling Facility for Video-On-Demand Service (주문형 비디오 서비스를 위한 실시간 스케쥴링 기능)

  • Sohn, Jong-Moon; Kim, Gil-Yong
    • The Transactions of the Korea Information Processing Society / v.4 no.10 / pp.2581-2595 / 1997
  • In this paper, the real-time facility of the operating system for a VOD (Video On Demand) server has been analyzed and implemented. The requirements of real-time scheduling were gathered by analyzing the video-data-transfer-path model; in particular, the influence of the bottleneck subsystem was analyzed. We then implemented a real-time scheduler and primitives appropriate for processing digital video. In the performance measurements, the degree of guarantee provided by the real-time scheduler was examined. The measured data show that most time constraints of the processes are satisfied, but network protocol processing by interrupt is a major obstacle to real-time scheduling. We also compared the real-time scheduler with a non-real-time scheduler by measuring inter-execution times. According to the results, the real-time scheduler should be used for efficient video service, because the processor time allocated to a process cannot be predicted under the non-real-time scheduler.
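The abstract does not name the scheduling policy used, so as one standard illustration of a real-time schedulability check for periodic video tasks, here is the Liu-Layland utilization bound for fixed-priority rate-monotonic scheduling; the task-set numbers are made up.

```python
def rm_utilization_bound(n):
    """Liu & Layland bound for n periodic tasks: n * (2^(1/n) - 1)."""
    return n * (2 ** (1.0 / n) - 1)

def rm_schedulable(tasks):
    """Sufficient (not necessary) test for rate-monotonic scheduling.

    tasks: list of (compute_time, period) in the same time unit.
    All deadlines are met if total utilization stays under the bound.
    """
    utilization = sum(c / p for c, p in tasks)
    return utilization <= rm_utilization_bound(len(tasks))

# Hypothetical VOD server tasks: 30 fps decode, disk read, network send
# (compute_time_ms, period_ms) -- the numbers are illustrative only.
tasks = [(5, 33), (4, 40), (6, 100)]
```

A set that fails this test may still be schedulable (the bound is conservative); an exact answer requires response-time analysis.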


Fast Extraction of Objects of Interest from Images with Low Depth of Field

  • Kim, Chang-Ick; Park, Jung-Woo; Lee, Jae-Ho; Hwang, Jenq-Neng
    • ETRI Journal / v.29 no.3 / pp.353-362 / 2007
  • In this paper, we propose a novel unsupervised video object extraction algorithm for individual images or image sequences with low depth of field (DOF). Low DOF is a popular photographic technique that conveys the photographer's intention by keeping only the object of interest (OOI) in sharp focus. We first describe a fast and efficient scheme for extracting OOIs from individual low-DOF images and then extend it to image sequences with low DOF. The basic algorithm unfolds into three modules. In the first module, a higher-order statistics map, which represents the spatial distribution of the high-frequency components, is obtained from the input low-DOF image. The second module locates a block-based OOI for further processing; from the block-based OOI, the final OOI is then obtained with pixel-level accuracy. We also present an algorithm that extends the extraction scheme to image sequences with low DOF. The proposed system requires no user assistance to determine the initial OOI, which is possible precisely because low-DOF images are used. The experimental results indicate that the proposed algorithm can serve as an effective tool for applications such as 2D-to-3D conversion and photo-realistic video scene generation.
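As a rough illustration of the first module, the sketch below computes a higher-order-statistics value per pixel from a high-frequency component. The high-pass filter (a horizontal first difference) and the window size are simplifications for illustration, not the paper's actual filters; the idea is that in-focus regions have strong high-frequency energy, so their fourth-order moment is large.

```python
def hos_map(image, window=1):
    """Per-pixel fourth-order moment of a simple high-pass response.

    image: list of rows of grayscale values. The high-frequency component
    is a horizontal first difference, and statistics are taken over a
    (2*window+1)^2 neighborhood clipped at the borders.
    """
    h, w = len(image), len(image[0])
    hf = [[image[y][x] - image[y][x - 1] if x > 0 else 0
           for x in range(w)] for y in range(h)]
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [hf[yy][xx]
                    for yy in range(max(0, y - window), min(h, y + window + 1))
                    for xx in range(max(0, x - window), min(w, x + window + 1))]
            mean = sum(vals) / len(vals)
            out[y][x] = sum((v - mean) ** 4 for v in vals) / len(vals)
    return out
```

Thresholding this map and cleaning it up at block level, then refining to pixel level, mirrors the module structure the abstract describes.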


Design and Implementation of UCC Multimedia Service Systems (UCC 멀티미디어 서비스 시스템 설계 및 구현)

  • Bok, Kyoung-Soo; Yeo, Myung-Ho; Lee, Mi-Sook; Lee, Nak-Gyu; Yoo, Kwan-Hee; Yoo, Jae-Soo
    • Proceedings of the Korea Contents Association Conference / 2007.11a / pp.178-182 / 2007
  • In this paper, we design and implement a prototype UCC service system for images and video. The proposed system consists of two components, a multimedia processing subsystem and a metadata management subsystem, and provides an API to UCC service developers. The multimedia processing subsystem supports media management and editing for images and video, as well as video streaming services. The metadata management subsystem supports metadata management and retrieval for images and video, along with UCC reply management and script processing.


A Beamforming-Based Video-Zoom Driven Audio-Zoom Algorithm for Portable Digital Imaging Devices

  • Park, Nam In; Kim, Seon Man; Kim, Hong Kook; Kim, Myeong Bo; Kim, Sang Ryong
    • IEIE Transactions on Smart Processing and Computing / v.2 no.1 / pp.11-19 / 2013
  • A video-zoom driven audio-zoom algorithm is proposed to provide audio zooming effects that match the degree of video zoom. The proposed algorithm is based on a super-directive beamformer operating on a 4-channel microphone array, in conjunction with a soft-masking process that uses the phase differences between microphones. The audio-zoom signal is obtained by multiplying the masked signal by an audio gain derived from the video-zoom level. The algorithm is implemented on a portable digital imaging device with a clock speed of 600 MHz after several levels of optimization: algorithmic, C-code, and memory. As a result, the proposed audio-zoom algorithm occupies 14.6% or less of the device's processing capacity. A performance evaluation conducted in a semi-anechoic chamber shows that signals from the front direction are amplified by approximately 10 dB relative to the other directions.
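The abstract says the audio gain is derived from the video-zoom level but does not give the mapping, so the sketch below assumes a linear-in-dB curve capped at roughly the 10 dB frontal amplification the evaluation reports; the maximum zoom and gain values are assumptions, not the paper's parameters.

```python
def audio_zoom_gain(zoom_level, max_zoom=4.0, max_gain_db=10.0):
    """Map a video-zoom level to a linear amplitude gain for frontal audio.

    The gain grows linearly in dB from 0 dB at zoom 1x to `max_gain_db`
    at `max_zoom` (an assumed curve), then is converted to an amplitude
    factor via 10^(dB/20).
    """
    zoom_level = min(max(zoom_level, 1.0), max_zoom)
    gain_db = max_gain_db * (zoom_level - 1.0) / (max_zoom - 1.0)
    return 10 ** (gain_db / 20.0)

def apply_zoom(samples, zoom_level):
    """Scale beamformed-and-masked samples by the zoom-derived gain."""
    g = audio_zoom_gain(zoom_level)
    return [s * g for s in samples]
```

In the paper's pipeline this gain multiplies the signal only after beamforming and phase-difference masking have already suppressed off-axis sources.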


Programmable Multimedia Platform for Video Processing of UHD TV (UHD TV 영상신호처리를 위한 프로그래머블 멀티미디어 플랫폼)

  • Kim, Jaehyun; Park, Goo-man
    • Journal of Broadcast Engineering / v.20 no.5 / pp.774-777 / 2015
  • This paper introduces the world's first programmable video-processing platform for enhancing the video quality of 8K (7680×4320) UHD (Ultra High Definition) TV at up to 60 frames per second. To supply the required computing capacity and memory bandwidth, the platform implements several key features: a symmetric multi-cluster architecture for parallel data processing, a ring data path between the clusters for data pipelining, and hardware accelerators for filter operations. Based on a reconfigurable processor (RP), the platform runs video-quality enhancement algorithms and effectively handles new UHD broadcasting standards and display panels.

Joint Spatial-Temporal Quality Improvement Scheme for H.264 Low Bit Rate Video Coding via Adaptive Frameskip

  • Cui, Ziguan; Gan, Zongliang; Zhu, Xiuchang
    • KSII Transactions on Internet and Information Systems (TIIS) / v.6 no.1 / pp.426-445 / 2012
  • Conventional rate control (RC) schemes for H.264 video coding usually regulate the output bit rate to match the channel bandwidth by adjusting the quantization parameter (QP) at a fixed full frame rate. Passive frame skipping to avoid buffer overflow then tends to occur when scenes change or high motion exists, especially at low bit rates, which degrades spatial-temporal quality and causes a jerky effect. In this paper, an active, content-adaptive frame-skipping scheme is proposed instead: it skips subjectively trivial frames based on the structural similarity (SSIM) between the original frame and a frame interpolated via a motion vector (MV) copy scheme. The bits saved from skipped frames are allocated to the coded key frames to enhance their spatial quality, and the skipped frames are recovered at the decoder side from adjacent key frames by the same MV-copy scheme, maintaining a constant frame rate. Experimental results show that the proposed active SSIM-based frame-skip scheme achieves better and more consistent spatial-temporal quality in both objective (PSNR) and subjective (SSIM) terms, with low complexity, compared to the classic fixed-frame-rate control method JVT-G012 and a prior objective-metric-based frame-skip method.
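The skip decision can be illustrated with the standard SSIM formula applied over a whole frame. The paper computes SSIM between the original frame and an MV-copy interpolated frame; applying the formula globally rather than over local windows, and the 0.95 threshold, are simplifications for illustration.

```python
def ssim_global(x, y, c1=6.5025, c2=58.5225):
    """Single-window SSIM between two equal-length pixel lists.

    c1 and c2 are the usual stabilizers for 8-bit video:
    (0.01*255)^2 and (0.03*255)^2.
    """
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return (((2 * mx * my + c1) * (2 * cov + c2))
            / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))

def should_skip(orig, interpolated, threshold=0.95):
    """Skip a frame when MV-copy interpolation already reproduces it well."""
    return ssim_global(orig, interpolated) >= threshold
```

Frames that the decoder could reconstruct almost perfectly from their neighbors contribute little subjective quality, so skipping them frees bits for the key frames.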