• Title/Summary/Keyword: Video-based

5,508 search results

Low-Complexity MPEG-4 Shape Encoding towards Realtime Object-Based Applications

  • Jang, Euee-Seon
    • ETRI Journal / v.26 no.2 / pp.122-135 / 2004
  • Although frame-based MPEG-4 video services have been successfully deployed since 2000, MPEG-4 video coding now faces strong competition in becoming a dominant player in the market. Object-based coding is one of the key functionalities of MPEG-4 video coding, and real-time object-based video encoding is also important for multimedia broadcasting in the near future. Object-based video services using MPEG-4 have not yet made a successful debut for several reasons; one critical problem is the coding complexity of object-based video coding compared with frame-based video coding. Since a video object is described with an arbitrary shape, the bitstream contains not only motion and texture data but also shape data, which introduces additional complexity on the decoder side as well as on the encoder side. In this paper, we analyze the current MPEG-4 video encoding tools and propose efficient coding technologies that reduce encoder complexity. Using the proposed coding schemes, we obtain a 56 percent reduction in shape-coding complexity over the MPEG-4 video reference software (Microsoft version, 2000 edition).


Application of Speech Recognition with Closed Caption for Content-Based Video Segmentations

  • Son, Jong-Mok;Bae, Keun-Sung
    • Speech Sciences / v.12 no.1 / pp.135-142 / 2005
  • An important aspect of video indexing is the ability to segment video into meaningful segments, i.e., content-based video segmentation. Since the audio in the sound track is synchronized with the image sequences of the video program, the speech signal in the sound track can be used to segment the video into meaningful segments. In this paper, we propose a new approach to content-based video segmentation that uses closed captions to construct a recognition network for speech recognition; accurate time information for segmentation is then obtained from the speech recognition process. In segmentation experiments on TV news programs, the proposed method successfully produced 56 video summaries from 57 TV news stories, demonstrating that the proposed scheme is very promising for content-based video segmentation.
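The timing step described above can be sketched as a toy: this is an illustration, not the authors' system, and it assumes the recognizer (whose network is built from the closed captions) emits word-level timestamps.

```python
# Illustrative sketch: derive segment boundaries for news stories by looking
# up, in hypothetical ASR output, the time at which the first word of each
# closed-caption story was recognized.

def segment_times(asr_words, stories):
    """asr_words: list of (word, start_sec) pairs from the recognizer;
    stories: list of caption word lists, one per news story."""
    boundaries = []
    cursor = 0
    for story in stories:
        first = story[0]
        # scan forward for the first occurrence of the story's opening word
        while cursor < len(asr_words) and asr_words[cursor][0] != first:
            cursor += 1
        if cursor < len(asr_words):
            boundaries.append(asr_words[cursor][1])
    # each segment runs from its boundary to the next story's boundary
    return [(s, e) for s, e in zip(boundaries, boundaries[1:] + [None])]

asr = [("good", 0.0), ("evening", 0.4), ("today", 5.2), ("in", 5.5),
       ("seoul", 5.8), ("next", 60.1), ("the", 60.3), ("weather", 60.6)]
stories = [["good", "evening", "today"], ["next", "the", "weather"]]
print(segment_times(asr, stories))  # [(0.0, 60.1), (60.1, None)]
```

A real system would align whole phrases rather than single words, but the principle is the same: the caption text constrains recognition, and recognition supplies the timestamps.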


Layer based Cooperative Relaying Algorithm for Scalable Video Transmission over Wireless Video Sensor Networks

  • Ha, Hojin
    • Journal of Korea Society of Digital Industry and Information Management / v.18 no.4 / pp.13-21 / 2022
  • Recently, various schemes for efficient video data transmission in wireless video sensor networks (WVSNs) have been studied. In this paper, a layer-based cooperative relaying (LCR) algorithm is proposed to minimize the distortion of scalable video transmission caused by packet loss in a WVSN. The proposed LCR algorithm consists of two modules. In the first step, a parameter-based error propagation metric is proposed to predict, at low complexity, the effect of each scalable layer on video quality degradation. In the second step, a layer-based cooperative relaying algorithm is proposed to minimize distortion due to packet loss, using the proposed error propagation metric together with the channel information of the video sensor node and the relay node. In experiments, the proposed algorithm improved peak signal-to-noise ratio (PSNR) in various channel environments compared to a previous algorithm (energy-based cooperative relaying, ECR) that does not consider error propagation. The proposed LCR algorithm minimizes video quality degradation from packet loss by using both the channel information of the relaying node and the amount of layer-based error propagation in the scalable video.
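The per-layer relaying decision can be illustrated with a toy calculation; the error-propagation weights and loss probabilities below are made-up numbers, not the paper's metric.

```python
# Hypothetical sketch of the two-step idea: weight each scalable layer by an
# (assumed) error-propagation metric, then decide per layer whether to send
# it via the relay, picking the path with the lower expected distortion.

def plan_relaying(err_prop, p_loss_direct, p_loss_relay):
    """err_prop[l]: distortion contribution if layer l is lost (base layers
    propagate errors into enhancement layers, so their weight is larger)."""
    plan, expected_distortion = [], 0.0
    for w, pd, pr in zip(err_prop, p_loss_direct, p_loss_relay):
        use_relay = pr * w < pd * w      # path with lower expected distortion
        plan.append("relay" if use_relay else "direct")
        expected_distortion += (pr if use_relay else pd) * w
    return plan, expected_distortion

# base layer weighted most heavily; relay channel better for layers 0 and 2
plan, d = plan_relaying(err_prop=[10.0, 4.0, 1.0],
                        p_loss_direct=[0.2, 0.05, 0.3],
                        p_loss_relay=[0.05, 0.1, 0.1])
print(plan, round(d, 2))  # ['relay', 'direct', 'relay'] 0.8
```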

Video Quality Assessment based on Deep Neural Network

  • Zhiming Shi
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.8 / pp.2053-2067 / 2023
  • This paper proposes two video quality assessment methods based on deep neural networks. (i) The first method uses IQF-CNN (a convolutional neural network based on image quality features) to build an image quality assessment model; tests on the LIVE image database show that it is effective, so the method is extended to video quality assessment. Every frame of the video is first scored, and the relationships between frames are then analyzed with a hysteresis function and different window functions to improve the accuracy of the video quality assessment. (ii) The second method combines a convolutional neural network (CNN) with a gated recurrent unit (GRU) network. The spatial features of video frames are extracted with the CNN, the temporal features with the GRU, and the extracted spatio-temporal features are passed through a fully connected layer to obtain the video quality score. All proposed methods are verified on video databases and compared with other methods.
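As a rough illustration of the second method's temporal stage (not the paper's trained model), the sketch below runs per-frame feature vectors, standing in for CNN spatial features, through a GRU cell and maps the final hidden state to a scalar score with a fully connected layer; all weights are random and untrained.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_score(frame_feats, params):
    """Run per-frame features through a GRU cell; map last state to a score."""
    Wz, Uz, Wr, Ur, Wh, Uh, w_out = params
    h = np.zeros(Wz.shape[0])
    for x in frame_feats:                 # one spatial feature vector per frame
        z = sigmoid(Wz @ x + Uz @ h)      # update gate
        r = sigmoid(Wr @ x + Ur @ h)      # reset gate
        h_tilde = np.tanh(Wh @ x + Uh @ (r * h))
        h = (1 - z) * h + z * h_tilde     # gated recurrent unit update
    return float(w_out @ h)               # fully connected layer -> score

rng = np.random.default_rng(0)
F, H, T = 8, 4, 10                        # feature size, hidden size, frames
params = tuple(rng.standard_normal(s) * 0.1 for s in
               [(H, F), (H, H), (H, F), (H, H), (H, F), (H, H), (H,)])
feats = rng.standard_normal((T, F))       # stand-in for CNN spatial features
print(gru_score(feats, params))           # an (untrained) scalar score
```

In the actual method the CNN features and all GRU weights would be learned on subjectively scored video databases.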

Content-Based Video Copy Detection using Motion Directional Histogram

  • Hyun, Ki-Ho;Lee, Jae-Chul
    • Journal of KIISE: Software and Applications / v.30 no.5_6 / pp.497-502 / 2003
  • Content-based video copy detection is a complementary approach to watermarking. As opposed to watermarking, which relies on inserting a distinct pattern into the video stream, video copy detection techniques match content-based signatures to detect copies of a video. Typical existing copy detection schemes have relied on image matching based on key-frame detection. This paper proposes a motion directional histogram, which quantizes and accumulates the directions of motion, for video copy detection. A video clip is represented by its motion directional histogram as a one-dimensional graph. The method is suitable for real-time indexing and for verifying TV commercials, which are high-motion video clips.
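The signature itself is simple to sketch; the bin count and distance measure below are assumptions for illustration, not taken from the paper.

```python
# Illustrative sketch of the signature described above: quantize each motion
# vector's direction into angle bins, accumulate a histogram over the clip,
# and compare clips by histogram distance.
import math

def motion_directional_histogram(motion_vectors, bins=8):
    hist = [0] * bins
    for dx, dy in motion_vectors:
        if dx == 0 and dy == 0:
            continue                        # skip zero-motion vectors
        angle = math.atan2(dy, dx) % (2 * math.pi)
        hist[int(angle / (2 * math.pi) * bins) % bins] += 1
    total = sum(hist) or 1
    return [h / total for h in hist]        # normalized 1-D signature

def l1_distance(h1, h2):
    return sum(abs(a - b) for a, b in zip(h1, h2))

clip = [(1, 0), (1, 0.2), (0, 1), (-1, 0)]
copy_ = [(2, 0), (1, 0.1), (0, 2), (-2, 0)]   # same directions, scaled
print(l1_distance(motion_directional_histogram(clip),
                  motion_directional_histogram(copy_)))  # 0.0
```

Because only directions are histogrammed, the signature is insensitive to changes in motion magnitude such as resizing, which suits copy detection.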

A Video Cache Replacement Scheme based on Local Video Popularity and Video Size for MEC Servers

  • Liu, Pingshan;Liu, Shaoxing;Cai, Zhangjing;Lu, Dianjie;Huang, Guimin
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.9 / pp.3043-3067 / 2022
  • As mobile traffic in the network increases exponentially, multi-access edge computing (MEC) is developing rapidly. MEC servers are geo-distributed and serve many mobile terminals locally to improve users' quality of experience (QoE). When the cache space of an MEC server is full, how to replace the cached videos is an important problem. This cache replacement problem becomes more complex due to dynamic video popularity and varied video sizes. Therefore, we propose a new cache replacement scheme based on local video popularity and video size to solve the cache replacement problem of MEC servers. First, we build a local video popularity model composed of a popularity rise model and a popularity attenuation model; the attenuation model further incorporates a frequency-dependent and a frequency-independent attenuation model. Second, we formulate a utility based on local video popularity and video size, with the weights of the two factors quantitatively determined using information entropy. Finally, we conduct extensive simulation experiments comparing the proposed scheme with several existing schemes. The simulation results show that our scheme outperforms the compared schemes in terms of hit rate, average delay, and server load under different network configurations.
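The utility step can be illustrated with a toy example. The entropy weight method shown is a standard formulation and the numbers are hypothetical, so this is only a sketch of the idea, not the paper's exact model.

```python
# Made-up miniature of the utility step: weight local popularity and
# (inverse) video size via the entropy weight method, then evict the cached
# video with the lowest utility.
import math

def normalize_columns(matrix):
    sums = [sum(col) for col in zip(*matrix)]
    return [[x / s for x, s in zip(row, sums)] for row in matrix]

def entropy_weights(shares):
    n = len(shares)
    divergences = []
    for col in zip(*shares):
        e = -sum(p * math.log(p) for p in col if p > 0) / math.log(n)
        divergences.append(1 - e)          # spread-out criteria weigh more
    total = sum(divergences)
    return [d / total for d in divergences]

# per cached video: (local popularity, 1/size); both "larger is better"
videos = {"a": (0.9, 1 / 700), "b": (0.2, 1 / 300), "c": (0.6, 1 / 900)}
shares = normalize_columns(list(videos.values()))
weights = entropy_weights(shares)
utility = {name: sum(w * s for w, s in zip(weights, row))
           for name, row in zip(videos, shares)}
evict = min(utility, key=utility.get)
print(evict)                               # video chosen for replacement
```

Here the moderately popular but large video loses out once size is weighted in, which is the kind of trade-off the utility is meant to capture.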

AnoVid: A Deep Neural Network-based Tool for Video Annotation

  • Hwang, Jisu;Kim, Incheol
    • Journal of Korea Multimedia Society / v.23 no.8 / pp.986-1005 / 2020
  • In this paper, we propose AnoVid, an automated video annotation tool based on deep neural networks that generates various metadata for each scene or shot in a long drama video containing rich elements. To this end, a novel metadata schema for drama video is designed. Based on this schema, AnoVid employs a total of six deep neural network models, for object detection, place recognition, time-zone recognition, person recognition, activity detection, and description generation, with which it can generate rich video annotation data. In addition, AnoVid not only automatically generates a JSON-type video annotation data file but also provides various visualization facilities for checking the video content analysis results. Through experiments on a real drama video, "Misaeng", we show the practical effectiveness and performance of the proposed video annotation tool.
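As a rough illustration of what a JSON-type annotation record could look like, the sketch below invents field names from the six model types listed in the abstract; it is not AnoVid's actual schema.

```python
import json

# every field name below is a hypothetical stand-in for the kind of schema
# the abstract describes (objects, place, time zone, persons, activities,
# description), not AnoVid's real format
shot_annotation = {
    "video": "drama_ep01.mp4",
    "shot": {"index": 42, "start_frame": 10320, "end_frame": 10455},
    "objects": [{"label": "desk", "bbox": [120, 80, 340, 260]}],
    "place": "office",
    "time_zone": "day",
    "persons": ["person_01", "person_02"],
    "activities": ["talking"],
    "description": "Two people talk at a desk in an office.",
}

print(json.dumps(shot_annotation, indent=2))
```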

A Multiple Features Video Copy Detection Algorithm Based on a SURF Descriptor

  • Hou, Yanyan;Wang, Xiuzhen;Liu, Sanrong
    • Journal of Information Processing Systems / v.12 no.3 / pp.502-510 / 2016
  • Considering the diversity of video copy transforms, a multi-feature video copy detection algorithm based on the Speeded-Up Robust Features (SURF) local descriptor is proposed in this paper. Coarse copy detection is performed by an ordinal measure (OM) algorithm after the video is preprocessed. If the matching result exceeds a specified threshold, fine copy detection is performed with the SURF descriptor, using a box filter to compute the integral image. To improve detection speed, the trace of the Hessian matrix in the SURF descriptor is used for pre-matching, and the traditional SURF feature vector is reduced in dimension for video matching. Our experimental results indicate that detection precision and recall are greatly improved compared with traditional algorithms, that the proposed multi-feature algorithm has good robustness and discrimination accuracy, and that detection speed is also improved.
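The coarse stage, the ordinal measure, is easy to sketch: rank the average intensities of a fixed grid of blocks and compare the rank permutations of two frames. The grid size below is an assumption for illustration.

```python
import numpy as np

def ordinal_signature(frame, grid=2):
    """Rank of each block's mean intensity within a grid x grid partition."""
    h, w = frame.shape
    bh, bw = h // grid, w // grid
    means = [frame[i*bh:(i+1)*bh, j*bw:(j+1)*bw].mean()
             for i in range(grid) for j in range(grid)]
    return np.argsort(np.argsort(means))   # double argsort yields ranks

def om_distance(f1, f2, grid=2):
    s1, s2 = ordinal_signature(f1, grid), ordinal_signature(f2, grid)
    return np.abs(s1 - s2).sum()           # 0 means identical block ranking

rng = np.random.default_rng(1)
frame = rng.integers(0, 256, size=(64, 64)).astype(float)
brighter = frame * 0.8 + 40                # global brightness/contrast change
print(om_distance(frame, brighter))        # 0: ranks survive the transform
```

Because ranks are invariant to any monotonic intensity change, the OM stage cheaply survives brightness and contrast transforms, leaving SURF matching for the harder cases.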

A new approach for content-based video retrieval

  • Kim, Nac-Woo;Lee, Byung-Tak;Koh, Jai-Sang;Song, Ho-Young
    • International Journal of Contents / v.4 no.2 / pp.24-28 / 2008
  • In this paper, we propose a new approach for content-based video retrieval using non-parametric motion classification in a shot-based video indexing structure. The proposed system supports real-time video retrieval through spatio-temporal feature comparison, measuring the similarity between visual features and between motion features, respectively, after extracting a representative frame and non-parametric motion information from shot-based video clips segmented by a scene-change detection method. After normalized motion vectors are created from an MPEG-compressed stream, non-parametric motion features are extracted by discretizing each normalized motion vector into angle bins and considering the mean, variance, and direction of the motion vectors in these bins. For the visual feature of the representative frame, we use an edge-based spatial descriptor. Experimental results show that our approach is superior to conventional methods in video indexing and retrieval performance.
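The motion-feature construction can be sketched as follows; the bin count is an assumption, and the per-bin statistics follow the mean/variance description in the abstract.

```python
# Rough sketch: discretize normalized motion vectors into angle bins, and
# keep each bin's share of vectors plus the mean and variance of its
# magnitudes as a fixed-length motion feature for a shot.
import math

def motion_feature(vectors, bins=4):
    groups = [[] for _ in range(bins)]
    for dx, dy in vectors:
        angle = math.atan2(dy, dx) % (2 * math.pi)
        groups[int(angle / (2 * math.pi) * bins) % bins].append(
            math.hypot(dx, dy))
    n = len(vectors) or 1
    feature = []
    for mags in groups:                    # 3 numbers per bin
        if mags:
            mean = sum(mags) / len(mags)
            var = sum((m - mean) ** 2 for m in mags) / len(mags)
            feature += [len(mags) / n, mean, var]
        else:
            feature += [0.0, 0.0, 0.0]
    return feature

vecs = [(0.5, 0.0), (1.0, 0.1), (0.0, 0.8), (-0.6, 0.0)]
print(motion_feature(vecs))                # 12-dimensional motion feature
```

Two shots can then be compared by a vector distance between their features, alongside the edge-based descriptor of their representative frames.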

Neural Network based Video Coding in JVET

  • Choi, Kiho
    • Journal of Broadcast Engineering / v.27 no.7 / pp.1021-1033 / 2022
  • After the Versatile Video Coding (VVC)/H.266 standard was completed, the Joint Video Exploration Team (JVET) began to investigate new technologies that could significantly increase coding gain for the next-generation video coding standard. One direction is to investigate signal-processing-based tools, while the other is to investigate neural-network-based technology. Neural Network based Video Coding (NNVC) had not been studied previously, and this is the first trial of such an approach in the standards group. After two years of research, JVET produced its first common software, called Neural Compression Software (NCS), with two NN-based in-loop filtering tools at its 27th meeting, and began to maintain NN-based technologies for the common experiment. The coding gains of the two filters in NCS-1.0 are 8.71% and 9.44% on average, respectively, in a random-access scenario. All material related to NCS can be found in the JVET repository. In this paper, we provide a brief overview and review of the NNVC activity in JVET, offering trends and insight into this new direction of video coding standardization.