• Title/Summary/Keyword: Natural Scene Statistics


No-reference quality assessment of dynamic sports videos based on a spatiotemporal motion model

  • Kim, Hyoung-Gook; Shin, Seung-Su; Kim, Sang-Wook; Lee, Gi Yong
    • ETRI Journal, v.43 no.3, pp.538-548, 2021
  • This paper proposes an approach to improve the performance of no-reference video quality assessment for sports videos with dynamic motion scenes using an efficient spatiotemporal model. In the proposed method, we divide the video sequences into video blocks and apply a 3D shearlet transform that can efficiently extract primary spatiotemporal features to capture dynamic natural motion scene statistics from the incoming video blocks. The concatenation of a deep residual bidirectional gated recurrent neural network and logistic regression is used to learn the spatiotemporal correlation more robustly and predict the perceptual quality score. In addition, conditional video block-wise constraints are incorporated into the objective function to improve quality estimation performance for the entire video. The experimental results show that the proposed method extracts spatiotemporal motion information more effectively and predicts the video quality with higher accuracy than the conventional no-reference video quality assessment methods.
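The first step the abstract describes — dividing a sequence into overlapping spatiotemporal blocks before per-block feature extraction — can be sketched in a few lines of numpy. The block and stride sizes below are illustrative assumptions, not the paper's settings, and the 3D shearlet transform itself is left as a downstream step.

```python
import numpy as np

def video_to_blocks(video, block=(8, 32, 32), stride=(4, 16, 16)):
    """Split a (T, H, W) grayscale video into overlapping
    spatiotemporal blocks, as a precursor to per-block feature
    extraction (e.g. a 3D shearlet transform)."""
    T, H, W = video.shape
    bt, bh, bw = block
    st, sh, sw = stride
    blocks = []
    for t in range(0, T - bt + 1, st):
        for y in range(0, H - bh + 1, sh):
            for x in range(0, W - bw + 1, sw):
                blocks.append(video[t:t+bt, y:y+bh, x:x+bw])
    return np.stack(blocks)

video = np.random.rand(16, 64, 64)   # 16 frames of 64x64
blocks = video_to_blocks(video)      # each block is (8, 32, 32)
```

Overlap between neighboring blocks (stride smaller than block size) is what lets the block-wise quality constraints see shared content across adjacent predictions.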

A Natural Scene Statistics Based Publication Classification Algorithm Using Support Vector Machine (서포트 벡터 머신을 이용한 자연 연상 통계 기반 저작물 식별 알고리즘)

  • Song, Hyewon; Kim, Doyoung; Lee, Sanghoon
    • The Journal of Korean Institute of Communications and Information Sciences, v.42 no.5, pp.959-966, 2017
  • The market for digital content such as e-books, cartoons, and webtoons is growing, but copyright infringement is a serious problem because such content is distributed through illegal channels, and copyright-protection technologies are not yet well developed. Therefore, in this paper, we propose an NSS-based publication classification method for copyright protection. Using histograms computed from natural scene statistics (NSS), we classify digital content with a support vector machine (SVM). The proposed algorithm is useful for copyright protection because it makes illegally distributed digital content easier to identify.
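As a rough illustration of the NSS-histogram feature the abstract mentions, the sketch below computes mean-subtracted contrast-normalized (MSCN) coefficients — a standard spatial-NSS quantity — and bins them into a normalized histogram that could be fed to an SVM. The window size, bin count, and box (rather than Gaussian) weighting are assumptions, not the paper's choices.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def mscn(image, win=7, eps=1e-8):
    """Mean-subtracted contrast-normalized coefficients with a
    box-filter local mean/std (NSS work typically uses a Gaussian
    window; a box window keeps this numpy-only)."""
    pad = win // 2
    padded = np.pad(image, pad, mode="reflect")
    wins = sliding_window_view(padded, (win, win))   # (H, W, win, win)
    mu = wins.mean(axis=(-1, -2))
    sigma = wins.std(axis=(-1, -2))
    return (image - mu) / (sigma + eps)

def nss_histogram(image, bins=32):
    """Normalized histogram of MSCN coefficients — a compact NSS
    feature vector for a downstream SVM classifier."""
    hist, _ = np.histogram(mscn(image).ravel(), bins=bins, range=(-3, 3))
    return hist / max(hist.sum(), 1)

feat = nss_histogram(np.random.rand(64, 64))
```

The appeal of this feature for publication classification is that MSCN histograms of natural photographs, line-art cartoons, and rendered text have characteristically different shapes, which a linear SVM can separate.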

No-Reference Visibility Prediction Model of Foggy Images Using Perceptual Fog-Aware Statistical Features (시지각적 통계 특성을 활용한 안개 영상의 가시성 예측 모델)

  • Choi, Lark Kwon; You, Jaehee; Bovik, Alan C.
    • Journal of the Institute of Electronics and Information Engineers, v.51 no.4, pp.131-143, 2014
  • We propose a no-reference perceptual fog density and visibility prediction model for a single foggy scene based on natural scene statistics (NSS) and perceptual "fog aware" statistical features. Unlike previous studies, the proposed model predicts fog density without multiple foggy images, without salient objects in the scene such as lane markings or traffic signs, without supplementary geographical information from an onboard camera, and without training on human-rated judgments. The proposed fog density and visibility predictor makes use only of measurable deviations from statistical regularities observed in natural foggy and fog-free images. Perceptual "fog aware" statistical features are derived from a corpus of natural foggy and fog-free images using a spatial NSS model and observed fog characteristics, including low contrast, faint color, and shifted luminance. The proposed model predicts not only the perceptual fog density of the entire image but also the local fog density of each patch. To evaluate the performance of the proposed model against human judgments of fog visibility, we conducted a subjective study using 100 varied foggy images. Results show that the fog density predicted by the model correlates well with human judgments. The proposed model is a new fog-density assessment approach based on human visual perception. We hope it will provide fertile ground for future research, not only to enhance the visibility of foggy scenes but also to accurately evaluate the performance of defogging algorithms.
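The fog characteristics the abstract lists — reduced contrast, faded color, shifted luminance — can each be measured with simple image statistics. The crude proxies below are illustrative stand-ins, not the paper's actual fog-aware feature set.

```python
import numpy as np

def fog_aware_stats(rgb):
    """Three coarse statistics that move in predictable directions
    under fog: contrast drops, saturation drops, luminance rises."""
    rgb = np.asarray(rgb, dtype=float)
    lum = rgb.mean(axis=-1)
    contrast = lum.std()                              # contrast proxy
    saturation = (rgb.max(-1) - rgb.min(-1)).mean()   # crude saturation proxy
    return np.array([contrast, saturation, lum.mean()])

rng = np.random.default_rng(0)
clear = rng.random((32, 32, 3))
foggy = 0.3 * clear + 0.7   # blend toward white "airlight"
f_clear, f_foggy = fog_aware_stats(clear), fog_aware_stats(foggy)
```

With the blend above, the foggy statistics shift exactly as the abstract predicts: lower contrast and saturation, higher mean luminance — which is why deviations in such statistics can index fog density without a reference image.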

Ship Number Recognition Method Based on an Improved CRNN Model

  • Wenqi Xu; Yuesheng Liu; Ziyang Zhong; Yang Chen; Jinfeng Xia; Yunjie Chen
    • KSII Transactions on Internet and Information Systems (TIIS), v.17 no.3, pp.740-753, 2023
  • Text recognition in natural scene images is a challenging problem in computer vision. Accurate identification of ship-number characters can markedly improve ship traffic management, but blurring caused by motion and text occlusion makes it difficult for recognition accuracy to meet practical requirements. To address these problems, this paper proposes a dual-branch network built on the CRNN recognition network that couples image restoration with character recognition. A CycleGAN module serves as the blur-restoration branch and a Pix2pix module as the character-occlusion branch; the two are coupled to reduce the impact of image blur and occlusion, and the recovered image is fed into the text-recognition branch to improve recognition accuracy. Extensive experiments show that the model is robust and easy to train, and evaluations on the CTW dataset and real ship images demonstrate that our method achieves more accurate results.
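One way to read the coupling the abstract describes is as routing: a degraded image passes through the relevant restoration branch(es) before reaching the recognizer. The callables and the toy string "images" below are purely hypothetical placeholders for the CycleGAN, Pix2pix, and CRNN modules, used only to show the control flow.

```python
def dual_branch_recognize(image, deblur, deocclude, crnn,
                          is_blurred, is_occluded):
    """Restore, then recognize: blurred inputs go through the
    deblurring branch, occluded inputs through the de-occlusion
    branch, and the restored image through the recognizer."""
    if is_blurred(image):
        image = deblur(image)
    if is_occluded(image):
        image = deocclude(image)
    return crnn(image)

# toy demo: strings stand in for images, tags for degradations
number = dual_branch_recognize(
    "BLUR|OCCL|EX-1234",
    deblur=lambda s: s.replace("BLUR|", ""),
    deocclude=lambda s: s.replace("OCCL|", ""),
    crnn=lambda s: s,
    is_blurred=lambda s: "BLUR|" in s,
    is_occluded=lambda s: "OCCL|" in s,
)
```

In the real network the branches are trained jointly with the recognizer rather than dispatched by hand, but the restore-before-recognize ordering is the same.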

No-Reference Sports Video-Quality Assessment Using 3D Shearlet Transform and Deep Residual Neural Network (3차원 쉐어렛 변환과 심층 잔류 신경망을 이용한 무참조 스포츠 비디오 화질 평가)

  • Lee, Gi Yong; Shin, Seung-Su; Kim, Hyoung-Gook
    • Journal of Korea Multimedia Society, v.23 no.12, pp.1447-1453, 2020
  • In this paper, we propose a method for no-reference quality assessment of sports videos using a 3D shearlet transform and a deep residual neural network. In the proposed method, 3D shearlet-transform-based spatiotemporal features are extracted from overlapping video blocks and passed to logistic regression concatenated with a deep residual neural network, trained under a conditional video block-wise constraint, to learn the spatiotemporal correlation and predict the quality score. Our evaluation shows that the proposed method predicts video quality more accurately than conventional no-reference video quality assessment methods.