• Title/Summary/Keyword: Performance video content


Deep Learning based Loss Recovery Mechanism for Video Streaming over Mobile Information-Centric Network

  • Han, Longzhe; Maksymyuk, Taras; Bao, Xuecai; Zhao, Jia; Liu, Yan
    • KSII Transactions on Internet and Information Systems (TIIS), v.13 no.9, pp.4572-4586, 2019
  • Mobile Edge Computing (MEC) and Information-Centric Networking (ICN) are essential network architectures for the future Internet. Their advantages, such as computation and storage capabilities at the edge of the network, in-network caching, and the named-data communication paradigm, can greatly improve the quality of video streaming applications. However, packet loss in wireless network environments still degrades video streaming performance, and the existing loss recovery approaches in ICN do not exploit the capabilities of MEC. This paper proposes a Deep Learning based Loss Recovery Mechanism (DL-LRM) for video streaming over MEC-based ICN. Unlike existing approaches, the Forward Error Correction (FEC) packets are generated at the edge of the network, which dramatically reduces the workload of the core network and backhaul. By monitoring network states, the proposed DL-LRM controls the FEC request rate with a deep reinforcement learning algorithm. Considering the characteristics of video streaming and MEC, we also develop content caching detection and fast retransmission algorithms to effectively utilize MEC resources. Experimental results demonstrate that DL-LRM adaptively adjusts the FEC request rate and achieves better video quality than existing approaches.
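
As a rough illustration of the loss-adaptive FEC control this abstract describes, the sketch below replaces the paper's deep reinforcement learning agent with a simple feedback rule; the function name, step sizes, and thresholds are assumptions, not the authors' design.

```python
# Minimal sketch of loss-adaptive FEC request-rate control (hypothetical names;
# the paper's deep reinforcement learning agent is replaced here by a simple
# additive feedback rule for illustration).

def update_fec_rate(fec_rate, observed_loss, target_loss=0.01,
                    step=0.02, max_rate=0.5):
    """Raise the FEC request rate when residual loss exceeds the target,
    lower it otherwise, keeping the rate within [0, max_rate]."""
    if observed_loss > target_loss:
        fec_rate += step          # request more redundancy from the edge
    else:
        fec_rate -= step / 2      # decay slowly to save backhaul bandwidth
    return min(max(fec_rate, 0.0), max_rate)

# Example: loss spikes drive the request rate up, calm periods drive it down.
rate = 0.1
for loss in [0.005, 0.03, 0.08, 0.02, 0.004]:
    rate = update_fec_rate(rate, loss)
    print(f"loss={loss:.3f} -> fec_rate={rate:.3f}")
```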

TsCNNs-Based Inappropriate Image and Video Detection System for a Social Network

  • Kim, Youngsoo; Kim, Taehong; Yoo, Seong-eun
    • Journal of Information Processing Systems, v.18 no.5, pp.677-687, 2022
  • We propose a detection algorithm based on tree-structured convolutional neural networks (TsCNNs) that finds pornography, propaganda, or other inappropriate content on a social media network. The algorithm sequentially applies the typical convolutional neural network (CNN) algorithm in a tree-like structure to minimize classification errors among similar classes, and thus improves accuracy. We implemented the detection system and conducted experiments on a data set comprising 6 ordinary classes and 11 inappropriate classes collected from the Korean military social network. Each model of the proposed algorithm was trained, and performance was then evaluated on the identified images and videos. Experimental results with 20,005 new images showed an overall image-identification accuracy of 99.51%, and the algorithm reduced the identification errors of the typical CNN algorithm by 64.87%. By reducing false alarms in video identification from the domain, the TsCNNs achieved optimal performance of 98.11% when using 10-minute frame-sampling intervals. This indicates that classification with proper sampling reduces both the computational burden and false alarms.
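
The tree-structured routing idea can be sketched as follows; the node models are stubbed as plain callables, and the class names are hypothetical stand-ins for the paper's trained CNNs.

```python
# Minimal sketch of tree-structured classification (hypothetical structure;
# the paper's TsCNNs use a trained CNN at each node, stubbed here as callables).

def tree_classify(image, root_model, leaf_models):
    """Route an image through a coarse root classifier, then refine the
    decision with the specialized classifier for that coarse group."""
    coarse_label = root_model(image)            # e.g. "ordinary" vs "inappropriate"
    fine_label = leaf_models[coarse_label](image)
    return coarse_label, fine_label

# Toy stand-ins for trained CNNs:
root = lambda img: "inappropriate" if sum(img) > 10 else "ordinary"
leaves = {
    "ordinary": lambda img: "landscape",
    "inappropriate": lambda img: "propaganda",
}
print(tree_classify([5, 9], root, leaves))
```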

A Semantic-based Video Retrieval System using Method of Automatic Annotation Update and Multi-Partition Color Histogram (자동 주석 갱신 및 멀티 분할 색상 히스토그램 기법을 이용한 의미기반 비디오 검색 시스템)

  • 이광형; 전문석
    • The Journal of Korean Institute of Communications and Information Sciences, v.29 no.8C, pp.1133-1141, 2004
  • To process video data effectively, the content information of the video must be loaded into a database, and semantic-based retrieval must be available for the various queries of users. In this paper, we propose a semantic-based video retrieval system that supports the semantic queries of various users through both feature-based and annotation-based retrieval of massive video data. From the user's initial query and the selection of a key frame extracted from the query, an agent refines the annotation of the extracted key frame. A key frame selected by the user also serves as a query image for finding the most similar key frames through the proposed feature-based retrieval method. In experiments, the designed and implemented system showed a precision of more than 90 percent in the performance assessment.
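
A minimal sketch of the multi-partition color histogram comparison follows; the 4x4 grid, 8 bins per channel, and histogram-intersection score are assumptions rather than the paper's exact parameters.

```python
# Minimal sketch of the multi-partition color histogram idea: split a frame
# into a grid of regions, histogram each region, and compare key frames by
# averaged histogram intersection (grid and bin count are assumptions).

import numpy as np

def partition_histograms(frame, grid=(4, 4), bins=8):
    """Return one normalized color histogram per grid cell of an HxWx3 frame."""
    h, w, _ = frame.shape
    hists = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            cell = frame[i*h//grid[0]:(i+1)*h//grid[0],
                         j*w//grid[1]:(j+1)*w//grid[1]]
            hist, _ = np.histogramdd(cell.reshape(-1, 3),
                                     bins=(bins,)*3, range=[(0, 256)]*3)
            hists.append(hist.ravel() / hist.sum())
    return np.array(hists)

def similarity(h1, h2):
    """Histogram intersection averaged over all partitions (1.0 = identical)."""
    return np.minimum(h1, h2).sum() / len(h1)

a = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
print(similarity(partition_histograms(a), partition_histograms(a)))  # -> 1.0
```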

Visual Semantic Based 3D Video Retrieval System Using HDFS

  • Ranjith Kumar, C.; Suguna, S.
    • KSII Transactions on Internet and Information Systems (TIIS), v.10 no.8, pp.3806-3825, 2016
  • This paper presents a new framework for visual semantic based 3D video search and retrieval applications. Recent 3D retrieval work focuses on shape analysis such as object matching, classification, and retrieval, rather than on video retrieval as a whole. In this context, we investigate the concept of 3D CBVR (Content Based Video Retrieval) for the first time, combining BOVW and MapReduce in a 3D framework. Shape, color, and texture are fused for feature extraction: a combination of geometric and topological features describes shape, and a 3D co-occurrence matrix describes color and texture. After the local descriptors are extracted, the TB-PCT (Threshold Based Predictive Clustering Tree) algorithm is used to generate a visual codebook, and matching is performed using a soft weighting scheme with the L2 distance function. Finally, the retrieved results are ranked according to their index values. To handle the prodigious amount of data and to retrieve it efficiently, HDFS is incorporated into the design. Evaluation on a 3D video dataset shows that the proposed system gives accurate results and also reduces the time complexity.
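
The matching stage named in the abstract, soft-weighted BOVW histograms compared with an L2 distance, can be sketched as below; the codebook is random since TB-PCT codebook construction is out of scope, and the top-k halving weights are an assumed soft-weighting scheme.

```python
# Minimal sketch of BOVW matching with soft weighting and an L2 distance
# (a random codebook stands in for the paper's TB-PCT visual codebook).

import numpy as np

def soft_bovw(descriptors, codebook, k=4):
    """Soft-assign each local descriptor to its k nearest visual words,
    down-weighting farther words, and accumulate a normalized histogram."""
    hist = np.zeros(len(codebook))
    for d in descriptors:
        dists = np.linalg.norm(codebook - d, axis=1)
        nearest = np.argsort(dists)[:k]
        for rank, idx in enumerate(nearest):
            hist[idx] += 1.0 / (2 ** rank)   # weight halves with each rank
    return hist / np.linalg.norm(hist)

codebook = np.random.rand(100, 32)           # 100 visual words, 32-D features
query = soft_bovw(np.random.rand(50, 32), codebook)
target = soft_bovw(np.random.rand(60, 32), codebook)
print("L2 distance:", np.linalg.norm(query - target))
```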

A Study on Fingerprinting Robustness Indicators for Immersive 360-degree Video (실감형 360도 영상 특징점 기술 강인성 지표에 관한 연구)

  • Kim, Youngmo; Park, Byeongchan; Jang, Seyoung; Yoo, Injae; Lee, Jaechung; Kim, Seok-Yoon
    • Journal of IKEEE, v.24 no.3, pp.743-753, 2020
  • In this paper, we propose a set of robustness indicators for immersive 360-degree video. With the full-fledged service of mobile carriers' 5G networks, large-capacity, immersive 360-degree videos can be used at high speed anytime, anywhere. Since such videos can be illegally distributed on web-hard services and torrents after DRM dismantling and various video modifications, however, evaluation indicators are required that can objectively assess filtering performance for copyright protection. The proposed indicators extend the existing 2D video robustness indicators and take into account the projection and reproduction methods that characterize immersive 360-degree video. A performance evaluation on a sample filtering system verifies that an excellent recognition rate of 95% or more is achieved with an execution time of about 3 seconds.
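
A toy version of such a robustness check might look like the following; the block-mean fingerprint and Gaussian-noise modification are illustrative stand-ins, not the paper's 360-degree features or test set.

```python
# Minimal sketch of a robustness check for a video fingerprint: apply a
# modification, re-extract the fingerprint, and count how often the original
# is still the best match (the block-mean fingerprint is a stand-in).

import numpy as np

def fingerprint(frame, grid=8):
    """Block-mean luminance signature of a grayscale frame."""
    h, w = frame.shape
    return np.array([frame[i*h//grid:(i+1)*h//grid,
                           j*w//grid:(j+1)*w//grid].mean()
                     for i in range(grid) for j in range(grid)])

frames = [np.random.randint(0, 256, (90, 160)).astype(float) for _ in range(20)]
db = np.array([fingerprint(f) for f in frames])

noisy = [f + np.random.normal(0, 10, f.shape) for f in frames]   # modification
hits = sum(int(np.argmin(np.linalg.norm(db - fingerprint(f), axis=1)) == i)
           for i, f in enumerate(noisy))
print(f"recognition rate: {hits / len(frames):.2%}")
```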

Video classifier with adaptive blur network to determine horizontally extrapolatable video content (적응형 블러 기반 비디오의 수평적 확장 여부 판별 네트워크)

  • Minsun Kim; Changwook Seo; Hyun Ho Yun; Junyong Noh
    • Journal of the Korea Computer Graphics Society, v.30 no.3, pp.99-107, 2024
  • While the demand for extrapolating video content horizontally or vertically is increasing, even the most advanced techniques cannot successfully extrapolate all videos. Therefore, it is important to determine if a given video can be well extrapolated before attempting the actual extrapolation. This can help avoid wasting computing resources. This paper proposes a video classifier that can identify if a video is suitable for horizontal extrapolation. The classifier utilizes optical flow and an adaptive Gaussian blur network, which can be applied to flow-based video extrapolation methods. The labeling for training was rigorously conducted through user tests and quantitative evaluations. As a result of learning from this labeled dataset, a network was developed to determine the extrapolation capability of a given video. The proposed classifier achieved much more accurate classification performance than methods that simply use the original video or fixed blur alone by effectively capturing the characteristics of the video through optical flow and adaptive Gaussian blur network. This classifier can be utilized in various fields in conjunction with automatic video extrapolation techniques for immersive viewing experiences.
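
A sketch of the input preparation this abstract describes, using OpenCV's dense optical flow; in the paper the blur strength comes from a learned adaptive network, so the flow-scaled sigma below is only an illustrative substitute.

```python
# Minimal sketch: dense optical flow between frames plus a Gaussian blur whose
# strength follows the flow magnitude (stand-in for the learned blur network).

import cv2
import numpy as np

def flow_and_blur(prev_gray, next_gray):
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude = np.linalg.norm(flow, axis=2)
    sigma = 1.0 + float(magnitude.mean())       # stronger motion -> more blur
    blurred = cv2.GaussianBlur(next_gray, (0, 0), sigma)
    return flow, blurred

prev = np.random.randint(0, 256, (120, 160), dtype=np.uint8)
nxt = np.roll(prev, 4, axis=1)                  # simulate horizontal motion
flow, blurred = flow_and_blur(prev, nxt)
print(flow.shape, blurred.shape)
```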

Efficient Video Retrieval Scheme with Luminance Projection Model (휘도투시모델을 적용한 효율적인 비디오 검색기법)

  • Kim, Sang Hyun
    • Journal of the Korea Academia-Industrial cooperation Society, v.16 no.12, pp.8649-8653, 2015
  • A number of video indexing and retrieval algorithms have been proposed to manage large video databases efficiently. The video similarity measure is one of the most important technical factors in a video content management system. In this paper, we propose a luminance characteristics model to measure video similarity efficiently. Most video indexing algorithms have commonly used histograms, edges, or motion features, whereas the proposed algorithm employs an efficient similarity measure based on the luminance projection. To index video sequences effectively and to reduce the computational complexity, we calculate video similarity using key frames extracted by a cumulative measure and compare the sets of key frames using the modified Hausdorff distance. Experimental results show that the proposed luminance projection model yields remarkably improved accuracy and performance over conventional algorithms such as the histogram comparison method, with low computational complexity.
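
The two ingredients named here, a luminance projection per frame and the modified Hausdorff distance between key-frame sets, can be sketched as follows; frame sizes and key-frame selection are assumptions.

```python
# Minimal sketch: luminance projection features (row and column means of the
# luma channel) compared across key-frame sets with a modified Hausdorff
# distance (average, rather than maximum, of nearest-neighbor distances).

import numpy as np

def luminance_projection(gray):
    """Concatenate horizontal and vertical luminance projections."""
    return np.concatenate([gray.mean(axis=1), gray.mean(axis=0)])

def modified_hausdorff(set_a, set_b):
    """Symmetric mean nearest-neighbor distance between two feature sets."""
    d_ab = np.mean([min(np.linalg.norm(a - b) for b in set_b) for a in set_a])
    d_ba = np.mean([min(np.linalg.norm(b - a) for a in set_a) for b in set_b])
    return max(d_ab, d_ba)

clip_a = [luminance_projection(np.random.rand(72, 96)) for _ in range(5)]
clip_b = [luminance_projection(np.random.rand(72, 96)) for _ in range(5)]
print("MHD:", modified_hausdorff(clip_a, clip_b))
```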

Design and Implementation of Content-based Video Database using an Integrated Video Indexing Method (통합된 비디오 인덱싱 방법을 이용한 내용기반 비디오 데이타베이스의 설계 및 구현)

  • Lee, Tae-Dong; Kim, Min-Koo
    • Journal of KIISE: Computing Practices and Letters, v.7 no.6, pp.661-683, 2001
  • With the rapid increase in the use of digital video information in recent years, it has become more important to manage video databases efficiently. The development of high-speed data networks and digital techniques has produced new multimedia applications, such as internet broadcasting and Video On Demand (VOD), that combine video data processing and computing. A video database should be constructed so that accurate feature information can be extracted from video with increasingly massive and complex characteristics and searched quickly and efficiently. There are essential differences between video databases and traditional databases, which raise new issues in video searching and data modeling and call for new database construction and retrieval methods. In this paper, we propose a construction and generation method for a content-based video database that accumulates the meaningful structure of video together with prior production information, and we implement a video database that can produce new content for internet broadcasting centered on that database. To this end, we propose a video indexing method that integrates annotation-based retrieval and content-based retrieval, extracting and retrieving the feature information of the video data via the relationship between the meaningful structure and the prior production information during video parsing and representative key frame extraction. The integrated indexing method improves retrieval performance because it simultaneously uses content-based metadata represented at the low level of the video and annotation-based metadata expressed at the high level, which is difficult to extract automatically.
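
A minimal sketch of what an integrated index entry could look like, pairing high-level annotations with a low-level content feature; all field and function names are hypothetical, and a dict stands in for the paper's video database.

```python
# Minimal sketch of an integrated index: each key-frame entry carries both
# annotation-based and content-based metadata, so a search can filter on
# keywords and then rank by feature distance (hypothetical structure).

import numpy as np

video_index = {}

def index_key_frame(video_id, shot_no, annotations, feature_vector):
    """Store annotation keywords and a content feature for one key frame."""
    video_index.setdefault(video_id, []).append({
        "shot": shot_no,
        "annotations": set(annotations),        # annotation-based metadata
        "feature": np.asarray(feature_vector),  # content-based metadata
    })

def search(keywords, query_feature, top=3):
    """Filter by annotation overlap, then rank by content-feature distance."""
    hits = [(vid, e) for vid, entries in video_index.items() for e in entries
            if e["annotations"] & set(keywords)]
    hits.sort(key=lambda h: np.linalg.norm(h[1]["feature"] - query_feature))
    return hits[:top]

index_key_frame("news_01", 3, ["anchor", "studio"], [0.2, 0.8, 0.1])
index_key_frame("news_01", 7, ["field", "reporter"], [0.6, 0.3, 0.4])
print(search(["anchor"], np.array([0.25, 0.75, 0.1])))
```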


Performance Evaluation of Differentiated Services to MPEG-4 FGS Video Streaming (MPEG-4 FGS 비디오 스트리밍에 대한 네트워크 차별화 서비스의 성능분석)

  • 신지태; 김종원
    • The Journal of Korean Institute of Communications and Information Sciences, v.27 no.7A, pp.711-723, 2002
  • The fine granular scalability (FGS) version of ISO/IEC MPEG-4 video streaming is investigated in this work with prioritized stream delivery over loss-rate differentiated networks. The proposed system focuses on the seamless integration of rate adaptation, prioritized packetization, and simplified differentiation for MPEG-4 FGS video streaming. It consists of three key components: 1) rate adaptation with scalable source encoding, 2) content-aware prioritized packetization, and 3) loss-based differential forwarding. More specifically, constant-quality rate adaptation is first achieved by optimally truncating the over-coded FGS stream based on embedded rate-distortion (R-D) information obtained from a piecewise linear R-D model. The rate-controlled video stream is then packetized and prioritized according to the loss impact of each packet. Prioritized packets are transmitted over the underlying network, where they are subject to differentiated dropping and forwarding. By focusing on end-to-end quality, we establish effective working conditions for the proposed streaming system, and its superior performance is verified with simulated MPEG-4 FGS video streaming.
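
The constant-quality truncation step can be sketched with piecewise-linear interpolation over measured rate-distortion anchors; the anchor values below are illustrative, not taken from the paper.

```python
# Minimal sketch of constant-quality truncation with a piecewise-linear R-D
# model: given measured (rate, distortion) anchors for one FGS-coded frame,
# interpolate the bit budget that hits a target distortion.

import numpy as np

def rate_for_distortion(rd_points, target_d):
    """Linearly interpolate the truncation rate that achieves target_d.
    rd_points: (rate, distortion) pairs, distortion decreasing in rate."""
    rates, dists = zip(*sorted(rd_points))
    # distortion falls as rate grows, so flip for np.interp (needs ascending x)
    return float(np.interp(target_d, dists[::-1], rates[::-1]))

rd = [(100, 40.0), (200, 28.0), (400, 20.0), (800, 15.0)]  # kbit, MSE
budget = rate_for_distortion(rd, target_d=24.0)
print(f"truncate enhancement layer at ~{budget:.0f} kbit")  # between 200 and 400
```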

Analysis of Uniqueness and Robustness Properties of Ordinal Signature for Video Matching (비디오 정합을 위한 오디널 특징의 유일성 및 강건성 분석)

  • Jeong Kwang-Min; Kim Jeong-Yeop; Hyun Ki-Ho; Ha Yeong-Ho
    • Journal of Korea Multimedia Society, v.9 no.5, pp.576-584, 2006
  • Content-based video matching measures the similarity of a video signature between an original clip and its copies. It is especially important to match the exact frame position, which depends on the frame rate, noise condition, and compression format of the video. The ordinal signature performs better than other video signatures under normal conditions, but previous work did not examine its uniqueness and robustness. Hua et al. performed a uniqueness test on videos compressed in different formats or frame sizes; however, their robustness test used images in other compression formats rather than noise. This paper proposes a robustness test method using several noise models and analyzes the uniqueness and robustness performance.
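
For reference, an ordinal signature and a simple matching distance can be sketched as follows; the 3x3 grid and normalized L1 rank distance are common choices stated here as assumptions.

```python
# Minimal sketch of an ordinal signature: rank the average intensities of a
# block grid, then compare rank vectors between frames.

import numpy as np

def ordinal_signature(gray, grid=3):
    """Rank of each block's mean intensity over a grid x grid partition."""
    h, w = gray.shape
    means = [gray[i*h//grid:(i+1)*h//grid, j*w//grid:(j+1)*w//grid].mean()
             for i in range(grid) for j in range(grid)]
    return np.argsort(np.argsort(means))        # ranks 0..grid*grid-1

def ordinal_distance(sig_a, sig_b):
    """Normalized L1 distance between two rank vectors (0 = identical order)."""
    n = len(sig_a)
    return np.abs(sig_a - sig_b).sum() / (n * n / 2)

frame = np.random.randint(0, 256, (90, 120)).astype(float)
noisy = frame + np.random.normal(0, 5, frame.shape)     # robustness probe
print(ordinal_distance(ordinal_signature(frame), ordinal_signature(noisy)))
```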
