• Title/Summary/Keyword: Video sequence

507 search results

Using the fusion of spatial and temporal features for malicious video classification (공간과 시간적 특징 융합 기반 유해 비디오 분류에 관한 연구)

  • Jeon, Jae-Hyun;Kim, Se-Min;Han, Seung-Wan;Ro, Yong-Man
    • The KIPS Transactions: Part B, v.18B no.6, pp.365-374, 2011
  • Recently, malicious video classification and filtering techniques have become of practical interest, since malicious multimedia content is easily accessible through the Internet, IPTV, online social networks, and other channels. Considerable research effort has been devoted to developing malicious video classification and filtering systems. However, such classification and filtering has not yet matured in terms of reliable performance. In particular, most conventional approaches have been limited to spatial features (such as the ratio of skin regions and bags of visual words) for malicious image classification, which has kept them from achieving acceptable classification and filtering performance. To overcome this limitation, we propose a new malicious video classification framework that exploits both spatial and temporal features readily extracted from a sequence of video frames. In particular, we develop effective temporal features based on motion periodicity and temporal correlation. In addition, to identify the best way of combining the spatial and temporal features, representative data fusion approaches are applied to the proposed framework. To demonstrate the effectiveness of our method, we collected 200 sexual intercourse videos and 200 non-sexual intercourse videos. Experimental results show that the proposed method improves the classification accuracy for sexual intercourse videos by 3.75 percentage points (from 92.25% to 96%). Furthermore, our experiments show that feature-level fusion of the spatial and temporal features achieves the best classification accuracy.
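
The entry above reports that feature-level fusion, that is, combining the spatial and temporal descriptors into one vector before classification, gave the best accuracy. The following is a minimal sketch of feature-level fusion under stated assumptions; the descriptor dimensions, the random placeholder features, and the SVM classifier are illustrative and not the authors' implementation.

```python
import numpy as np
from sklearn.svm import SVC

def feature_level_fusion(spatial_feats, temporal_feats):
    """Concatenate per-video spatial and temporal descriptors into a
    single joint vector (feature-level fusion)."""
    return np.concatenate([spatial_feats, temporal_feats], axis=1)

# Placeholder descriptors for 400 videos (e.g. skin-ratio / bag-of-visual-words
# style spatial features and motion-periodicity style temporal features).
rng = np.random.default_rng(0)
spatial = rng.random((400, 128))
temporal = rng.random((400, 32))
labels = np.repeat([1, 0], 200)          # 200 malicious, 200 benign videos

fused = feature_level_fusion(spatial, temporal)
clf = SVC(kernel="rbf").fit(fused, labels)

# At test time the same two descriptors are extracted, concatenated, and
# classified with the single model trained on the fused feature space.
print(clf.predict(feature_level_fusion(spatial[:5], temporal[:5])))
```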

An Efficient Scene Change Detection Algorithm Considering Brightness Variation (밝기 변화를 고려한 효율적인 장면전환 검출 알고리즘)

  • Kim Sang-Hyun
    • Journal of the Institute of Convergence Signal Processing, v.6 no.2, pp.74-81, 2005
  • As the amount of multimedia data increases, various scene change detection algorithms for video indexing and sequence matching have been proposed to manage and utilize digital media efficiently. In this paper, we propose a robust scene change detection algorithm for video sequences with abrupt luminance variations. To improve accuracy and reduce the computational complexity of video indexing under abrupt luminance variations, the proposed algorithm utilizes edge features as well as color features, which yields remarkably better performance than conventional algorithms. In the proposed algorithm, we first extract candidate shot boundaries using color histograms and then use edge matching and luminance compensation to determine whether they are true shot boundaries or merely luminance changes. If the scene contains only trivial brightness variations, edge matching and luminance compensation are performed only at the candidate shot boundaries. Experimental results show that the proposed method achieves remarkably higher performance and efficiency than conventional methods at similar computational complexity.
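
The two-stage structure described above, a cheap color-histogram screening pass followed by edge-based verification only at candidate boundaries, can be sketched as follows. The histogram size, thresholds, and the simple gradient-based edge matcher are assumptions for illustration rather than the paper's exact method.

```python
import numpy as np

def color_hist(frame, bins=16):
    """Per-channel color histogram, normalized to sum to 1."""
    h = [np.histogram(frame[..., c], bins=bins, range=(0, 256))[0]
         for c in range(frame.shape[-1])]
    h = np.concatenate(h).astype(float)
    return h / h.sum()

def edge_map(frame, thresh=30):
    """Crude gradient-magnitude edge map on the luminance channel."""
    y = frame.mean(axis=-1)
    gx = np.abs(np.diff(y, axis=1, prepend=y[:, :1]))
    gy = np.abs(np.diff(y, axis=0, prepend=y[:1, :]))
    return (gx + gy) > thresh

def detect_shot_boundaries(frames, hist_thresh=0.4, edge_thresh=0.5):
    boundaries = []
    for i in range(1, len(frames)):
        # Stage 1: cheap color-histogram difference -> candidate boundaries.
        d = np.abs(color_hist(frames[i]) - color_hist(frames[i - 1])).sum()
        if d < hist_thresh:
            continue
        # Stage 2: edge matching, largely insensitive to a global luminance
        # change, confirms or rejects the candidate boundary.
        e_prev, e_cur = edge_map(frames[i - 1]), edge_map(frames[i])
        mismatch = np.logical_xor(e_prev, e_cur).mean() / max(e_prev.mean(), 1e-6)
        if mismatch > edge_thresh:
            boundaries.append(i)   # a real cut, not just a brightness change
    return boundaries
```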


A Study on Video Data Protection Method based on MPEG using Dynamic Shuffling (동적 셔플링을 이용한 MPEG기반의 동영상 암호화 방법에 관한 연구)

  • Lee, Ji-Bum;Lee, Kyoung-Hak;Ko, Hyung-Hwa
    • Journal of Korea Multimedia Society, v.10 no.1, pp.58-65, 2007
  • This paper proposes a digital video protection algorithm for MPEG-based moving pictures. Shuffling-based encryption algorithms that use a fixed random shuffling table are quite simple and effective, but they are vulnerable to the chosen-plaintext attack. To overcome this problem, the key used to generate the shuffling table must be changed, but doing so can place a significant burden on the key management system. A better approach is to generate the shuffling table from the local features of the image itself. To withstand the chosen-plaintext attack, we first propose an interleaving algorithm that adapts to the local features of an image. Second, using a multiple shuffling method that combines this interleaving with the existing random shuffling method, we encrypt the DPCM-processed 8×8 blocks. Experimental results show that the proposed algorithm requires only 10% of the time of the SEED encryption algorithm and introduces no overhead bits. For video sequence encryption, multiple random shuffling algorithms are used to encrypt the DC and AC coefficients of intra frames, while motion vector encryption and macroblock shuffling are used to encrypt the intra-coded macroblocks of predicted frames.
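
The idea of deriving the shuffling table from a local image feature, so that the permutation changes with the content while remaining recoverable by a receiver that holds the key, can be sketched as below. The choice of feature (the block mean, which is invariant under the shuffle) and the hash-based seeding are illustrative assumptions, not the paper's construction.

```python
import hashlib
import numpy as np

def block_feature(block):
    """Local feature of an 8x8 coefficient block (here: quantized mean).
    The mean is unchanged by shuffling, so the receiver can recompute the
    same feature from the shuffled block.  This particular feature is an
    illustrative assumption, not the one used in the paper."""
    return int(block.mean()) // 4

def dynamic_permutation(feature, key, n=64):
    """Derive a content-dependent permutation from the secret key and the
    local feature, so the shuffling table varies from block to block."""
    digest = hashlib.sha256(f"{key}:{feature}".encode()).digest()
    rng = np.random.default_rng(int.from_bytes(digest[:8], "big"))
    return rng.permutation(n)

def shuffle_block(coeffs, key):
    """Shuffle the 64 DPCM-processed coefficients of one 8x8 block."""
    perm = dynamic_permutation(block_feature(coeffs), key)
    return coeffs.reshape(64)[perm].reshape(8, 8)

def unshuffle_block(shuffled, key):
    """Invert the shuffle using the key and the shuffle-invariant feature."""
    perm = dynamic_permutation(block_feature(shuffled), key)
    flat = np.empty(64, dtype=shuffled.dtype)
    flat[perm] = shuffled.reshape(64)
    return flat.reshape(8, 8)

block = np.arange(64, dtype=np.int16).reshape(8, 8)
enc = shuffle_block(block, key="secret")
assert np.array_equal(unshuffle_block(enc, key="secret"), block)
```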


Real-Time Rate Control with Token Bucket for Low Bit Rate Video (토큰 버킷을 이용한 낮은 비트율 비디오의 실시간 비트율 제어)

  • Park, Sang-Hyun;Oh, Won-Geun
    • Journal of the Korea Institute of Information and Communication Engineering, v.10 no.12, pp.2315-2320, 2006
  • A real-time frame-layer rate control algorithm with a token bucket traffic shaper is proposed for low bit rate video coding. The proposed rate control method uses a non-iterative optimization method for low computational complexity, and performs bit allocation at the frame level to minimize both the average distortion over an entire sequence and the variation in distortion between frames. To reduce quality fluctuation, we use a sliding-window scheme that does not require a pre-analysis pass. Therefore, the proposed algorithm introduces no additional encoding delay and is suitable for real-time, low-complexity video encoders. Experimental results indicate that the proposed method provides better visual and PSNR performance than the existing rate control method.
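
A token bucket traffic shaper of the kind referred to above can be sketched in a few lines; the fill rate and bucket depth below are placeholder values, and the coupling to the encoder's quantization decision is only hinted at in a comment.

```python
import time

class TokenBucket:
    """Simple token bucket: tokens accumulate at `rate` bytes/sec up to
    `capacity`; a frame of `nbytes` may be emitted only if enough tokens
    are available, which bounds the encoder's instantaneous output rate."""

    def __init__(self, rate, capacity):
        self.rate = rate            # token fill rate (bytes per second)
        self.capacity = capacity    # bucket depth (maximum burst, bytes)
        self.tokens = capacity
        self.last = time.monotonic()

    def _refill(self):
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now

    def consume(self, nbytes):
        """Try to emit a frame of nbytes; return True if it conforms."""
        self._refill()
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            return True
        return False

    def budget(self):
        """Tokens currently available; a frame-layer rate controller can use
        this budget when choosing the next frame's quantization parameter."""
        self._refill()
        return self.tokens

# Example: a 64 kbit/s target with a one-second burst allowance.
bucket = TokenBucket(rate=8000, capacity=8000)
```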

Video Shot Retrieval in H.264/AVC compression domain (H.264/AVC 압축 영역에서의 동영상 검색)

  • Byun Ju-Wan;Kim Sung-Min;Won Chee-Sun
    • Journal of the Institute of Electronics Engineers of Korea SP, v.43 no.5 s.311, pp.72-78, 2006
  • In this paper, we present a video shot retrieval algorithm for the H.264/AVC compressed domain. Unlike previous standards such as MPEG-2 and MPEG-4, H.264/AVC supports variable block sizes for motion compensation. Therefore, existing video retrieval algorithms that exploit motion vectors in the MPEG-2 and MPEG-4 domains are not directly applicable to H.264/AVC. We therefore devise a method that projects the motion vectors of blocks larger than 4×4 onto their constituent 4×4 blocks, the smallest block size. The method also uses correlations among features as the similarity measure. Experimental results with standard videos of 10,558 frames and commercial videos of 48,161 frames show that the proposed method yields an ANMRR of less than 0.2.
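
The projection of variable-size motion vectors onto the smallest 4×4 block grid can be illustrated as follows; the partition layout in the example is made up, and the real method operates on parsed H.264/AVC syntax rather than tuples.

```python
import numpy as np

def project_to_4x4(partitions, mb_size=16):
    """Project motion vectors of variable-size H.264/AVC partitions onto a
    uniform 4x4 grid inside one 16x16 macroblock.

    `partitions` is a list of (x, y, w, h, mvx, mvy) tuples in pixel units,
    e.g. a 16x8 partition covers two rows of 4x4 blocks.  Each 4x4 block
    simply inherits the motion vector of the partition that contains it.
    """
    grid = np.zeros((mb_size // 4, mb_size // 4, 2), dtype=float)
    for x, y, w, h, mvx, mvy in partitions:
        grid[y // 4:(y + h) // 4, x // 4:(x + w) // 4] = (mvx, mvy)
    return grid

# One macroblock coded as a 16x8 partition plus two 8x8 partitions
# (illustrative motion vectors).
mb = [(0, 0, 16, 8, 2.0, -1.0),
      (0, 8, 8, 8, 0.5,  0.0),
      (8, 8, 8, 8, 1.5,  0.5)]
print(project_to_4x4(mb).reshape(-1, 2))  # 16 motion vectors, one per 4x4 block
```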

Content-Based Image Retrieval Algorithm Using HAQ Algorithm and Moment-Based Feature (HAQ 알고리즘과 Moment 기반 특징을 이용한 내용 기반 영상 검색 알고리즘)

  • 김대일;강대성
    • Journal of the Institute of Electronics Engineers of Korea SP, v.41 no.4, pp.113-120, 2004
  • In this paper, we propose an efficient feature extraction and image retrieval algorithm for content-based retrieval. First, for each input image, which is a key frame of an MPEG video, we extract the object using a Gaussian edge detector and compute object features: a location feature, a distributed-dimension feature, and invariant moment features. Next, we extract a characteristic color feature using the proposed HAQ (Histogram Analysis and Quantization) algorithm. Finally, for a query image, which is a shot frame other than the key frames of the MPEG video, we perform retrieval over the four features in sequence using the proposed matching method. The purpose of this paper is to propose a novel content-based image retrieval algorithm that retrieves the key frame of the MPEG video shot belonging to the scene requested by the user. The experimental results show efficient retrieval over 836 sample images from 10 music videos using the proposed algorithm.
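
Retrieving with four features "in sequence" amounts to a cascade in which each feature progressively filters the candidate key frames. A minimal sketch under stated assumptions follows; the feature names, Euclidean distances, and thresholds are illustrative and do not reproduce the paper's HAQ or moment definitions.

```python
import numpy as np

def cascade_retrieve(query, database, stages):
    """Match the query against the database one feature at a time,
    keeping only the candidates that survive each stage."""
    candidates = list(database)
    for feat, threshold in stages:
        candidates = [
            k for k in candidates
            if np.linalg.norm(query[feat] - database[k][feat]) < threshold
        ]
        if not candidates:
            break
    return candidates

# Hypothetical per-key-frame features; the thresholds are arbitrary.
stages = [("location", 1.0), ("dimension", 0.9), ("moments", 0.8), ("color", 0.7)]
rng = np.random.default_rng(1)
database = {f"keyframe_{i}": {f: rng.random(4) for f, _ in stages} for i in range(100)}
query = {f: rng.random(4) for f, _ in stages}
print(cascade_retrieve(query, database, stages))
```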

FPGA Design of Open-Loop Frame Prediction Processor for Scalable Video Coding (스케일러블 비디오 코딩을 위한 Open-Loop 프레임 예측 프로세서의 FPGA 설계)

  • Seo Young-Ho
    • The Journal of Korean Institute of Communications and Information Sciences, v.31 no.5C, pp.534-539, 2006
  • In this paper, we propose a new frame prediction filtering technique and a hardware (H/W) architecture for scalable video coding. We examine MCTF (motion-compensated temporal filtering) and the hierarchical B-picture, which are techniques for eliminating correlation between video frames. Since these techniques correspond to temporally non-causal systems, they have fundamental drawbacks: long latency and a large frame buffer. We propose a new architecture that can be implemented efficiently by reconfiguring the non-causal system as a causal one. Exploiting the repetitive nature of the arithmetic, we propose a new frame prediction filtering cell (FPFC), and by replicating this cell we reconfigure the whole arithmetic architecture. After analyzing the operational sequence of the arithmetic in detail and imposing causality for hardware implementation, the unit cell is optimized. A new FPFC kernel is organized as simply as possible by repeatedly arranging the unit cells, and an FPFC processor for scalable video coding is realized.
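
MCTF, which the entry above targets in hardware, is commonly described as temporal lifting: each frame pair is split into a high-pass (prediction) and a low-pass (update) frame. The sketch below shows one Haar-style lifting level without motion compensation; it illustrates only the filtering structure and says nothing about the paper's FPFC hardware mapping.

```python
import numpy as np

def mctf_lift(frames):
    """One level of Haar-style temporal lifting over consecutive frame pairs.
    Motion compensation is omitted for brevity; real MCTF predicts along
    motion trajectories before taking the temporal difference."""
    lows, highs = [], []
    for even, odd in zip(frames[0::2], frames[1::2]):
        high = odd - even              # predict step: temporal residual
        low = even + 0.5 * high        # update step: temporally filtered frame
        highs.append(high)
        lows.append(low)
    return lows, highs

def mctf_inverse(lows, highs):
    """Invert the lifting steps to recover the original frame pairs."""
    frames = []
    for low, high in zip(lows, highs):
        even = low - 0.5 * high
        odd = high + even
        frames += [even, odd]
    return frames

# Round trip on synthetic frames.
rng = np.random.default_rng(2)
seq = [rng.random((4, 4)) for _ in range(8)]
l, h = mctf_lift(seq)
assert np.allclose(mctf_inverse(l, h), seq)
```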

A Study on Guaranteed Quality of Service in Multiplexed MPEG video sources over BcN Network (BcN망에서 다중화된 MPEG 비디오소스의 QoS 보장 방식)

  • Park Joon-Yul;Lee Han-Young
    • Journal of the Institute of Electronics Engineers of Korea TC, v.43 no.3 s.345, pp.78-83, 2006
  • In this paper, we propose an active bandwidth allocation scheme for multiplexed MPEG video streams over a BcN network. For real-time processing, the bit rate of the multiplexed source is estimated by linear prediction in each measurement period. When the resulting target quality is not met, we propose an over-allocation method and a reallocation method to guarantee QoS. We use two kinds of sources: one is a randomly multiplexed source made of four different video sources, and the other additionally takes the arrangement of I-frames in the sequence into account. With these sources, we analyze the linear prediction and compare the over-allocation method with the reallocation method. As a result, both schemes achieve the target quality value, the extra allocated bandwidth remains below 10% when the measurement period exceeds 1.8 seconds, and the utilization exceeds 0.9. In particular, the reallocation scheme achieves a better target quality value under the same conditions.
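
Per-period linear prediction of the multiplexed bit rate, followed by allocation with a safety margin (over-allocation), might look roughly like the sketch below; the prediction order, margin, and sample rates are illustrative assumptions.

```python
import numpy as np

def predict_next_rate(history, order=3):
    """Least-squares linear prediction of the next measurement period's
    bit rate from the last `order` measured rates."""
    history = np.asarray(history, dtype=float)
    if len(history) <= order:
        return float(history.mean())
    # Build the usual autoregressive design matrix from the rate history.
    X = np.array([history[i:i + order] for i in range(len(history) - order)])
    y = history[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(history[-order:] @ coeffs)

def allocate(history, margin=0.10):
    """Over-allocation: reserve the predicted rate plus a fixed margin."""
    return predict_next_rate(history) * (1.0 + margin)

# Measured aggregate rates (kbit/s) of the multiplexed source per period.
rates = [820, 910, 870, 950, 990, 930]
print(round(allocate(rates), 1))
```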

Vehicle Speed Measurement using SAD Algorithm (SAD 알고리즘을 이용한 차량 속도 측정)

  • Park, Seong-Il;Moon, Jong-Dae;Ko, Young-Hyuk
    • The Journal of the Institute of Internet, Broadcasting and Communication, v.14 no.5, pp.73-79, 2014
  • In this paper, we propose a mechanism that can measure traffic flow and vehicle speed on highways as well as ordinary roads by using video and image processing to detect and track cars in a video sequence. The proposed mechanism uses the first few frames of the video stream to estimate the background image. The visual tracking system is a simple algorithm based on the sum of absolute frame differences: it subtracts the background from each video frame to produce foreground images. By thresholding and performing morphological closing on each foreground image, the mechanism produces binary feature images, which are shown in the threshold window. By measuring when a vehicle passes the "first white line" mark and then the "second white line" mark, the car's position can be determined, and the average velocity is obtained as the change in position divided by the time over which the change takes place. The results of the proposed mechanism agree well with the measured data, and the results are displayed in real time.
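
The measurement principle, background subtraction to segment vehicles and average velocity computed as a known line spacing divided by the crossing time, can be sketched as follows; the median background estimate, thresholds, and example numbers are assumptions for illustration.

```python
import numpy as np

def estimate_background(frames):
    """Median of the first few frames as a static background estimate."""
    return np.median(np.stack(frames), axis=0)

def foreground_mask(frame, background, thresh=25):
    """Sum-of-absolute-differences foreground segmentation."""
    return np.abs(frame.astype(int) - background.astype(int)).sum(axis=-1) > thresh

def speed_kmh(frame_first_line, frame_second_line, fps, line_gap_m):
    """Average speed from the frame indices at which the vehicle crosses
    two road markings a known distance apart (v = distance / time)."""
    dt = (frame_second_line - frame_first_line) / fps
    return (line_gap_m / dt) * 3.6

# Example: crossing the two lines 12 frames apart at 30 fps, lines 10 m apart.
print(round(speed_kmh(100, 112, fps=30, line_gap_m=10.0), 1))  # ~90 km/h
```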

Robust Object Detection from Indoor Environmental Factors (다양한 실내 환경변수로부터 강인한 객체 검출)

  • Choi, Mi-Young;Kim, Gye-Young;Choi, Hyung-Il
    • Journal of the Korea Society of Computer and Information, v.15 no.2, pp.41-46, 2010
  • In this paper, we propose a detection method of reduced computational complexity for separating moving objects from the background in a generic video sequence. In typical indoor environments, it is difficult to detect objects accurately because of environmental factors such as lighting changes, shadows, and reflections on the floor. First, a background image for object detection is created. When an object appears in the video, a mixture image is generated through several operations that compare the similarity between the current input image and the previously created background image, and this mixture image is used together with the input frame to detect objects. The detected objects are then refined: a labeling process removes noise components, and morphological operations complete the object region. As a result, objects are detected robustly against environmental factors such as lighting changes, shadows, and reflections on the floor, and the proposed system detects object regions more effectively than existing systems.
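
A rough analogue of the pipeline described above, background subtraction followed by morphological cleanup and labeling of connected components, is sketched below; the paper's mixture-image step is not reproduced, and all thresholds and structuring elements are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def detect_objects(frame, background, diff_thresh=30, min_area=200):
    """Background subtraction, morphological cleanup, and connected-component
    labeling to extract object bounding boxes from one color frame."""
    diff = np.abs(frame.astype(int) - background.astype(int))
    mask = diff.max(axis=-1) > diff_thresh                     # foreground candidates
    mask = ndimage.binary_closing(mask, structure=np.ones((5, 5)))
    mask = ndimage.binary_opening(mask, structure=np.ones((3, 3)))  # drop small noise
    labels, _ = ndimage.label(mask)
    boxes = []
    for region in ndimage.find_objects(labels):
        ys, xs = region
        if (ys.stop - ys.start) * (xs.stop - xs.start) >= min_area:
            boxes.append((xs.start, ys.start, xs.stop, ys.stop))
    return boxes
```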