• Title/Summary/Keyword: Video media

Search Results: 2,653

Adaptive Scanning Method for Fine Granularity Scalable Video Coding

  • Park, Gwang-Hoon;Kim, Kyu-Heon
    • ETRI Journal
    • /
    • v.26 no.4
    • /
    • pp.332-343
    • /
    • 2004
  • One of the most significant recent technical trends is "digital convergence," which is leading the technical paradigm toward a ubiquitous environment. As an initial step toward realizing such an environment, the convergence of the broadcasting and telecommunication fields is under way, and it requires a scalable video coding scheme for one-source, multi-use media. Traditional scalable video coding schemes, however, have limitations in providing stable picture quality, especially in the region of interest. This paper therefore introduces an adaptive scanning method designed for regionally stable picture quality in a ubiquitous video coding environment; it improves the subjective quality of the decoded video by encoding, transmitting, and decoding the top-priority image information of the region of interest first, so that the video is more clearly visible to users. Simulation results show that the proposed scanning method achieves subjective picture quality far better than the raster scan order widely used in conventional video coding schemes, especially in the region of interest, without significant quality loss in the remaining region.

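The abstract describes the idea of scanning region-of-interest data first but not the algorithm itself. As a rough, hypothetical sketch of ROI-prioritized scanning (not the paper's actual method; the function name and mask format are assumptions), one can reorder the conventional raster scan so that positions inside an ROI mask are visited before the rest:

```python
def roi_priority_scan(width, height, roi_mask):
    """Return a scan order (list of (row, col)) that visits every position
    flagged in roi_mask first, then the remaining positions, with each
    group kept in conventional raster order.
    roi_mask is a set of (row, col) tuples marking the region of interest."""
    raster = [(r, c) for r in range(height) for c in range(width)]
    roi = [p for p in raster if p in roi_mask]
    rest = [p for p in raster if p not in roi_mask]
    return roi + rest

# Example: a 4x4 frame whose top-left 2x2 block is the region of interest.
order = roi_priority_scan(4, 4, {(0, 0), (0, 1), (1, 0), (1, 1)})
```

Because the ROI positions lead the scan, a bitstream truncated by the channel still carries the region of interest at full fidelity, which is the subjective-quality effect the abstract describes.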

SPATIOTEMPORAL MARKER SEARCHING METHOD IN VIDEO STREAM

  • Shimizu, Noriyuki;Miyao, Jun'ichi
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2009.01a
    • /
    • pp.812-815
    • /
    • 2009
  • This paper discusses a method for searching for special markers attached to persons in a surveillance video stream. The marker is a small plate with infrared LEDs; it is called a spatiotemporal marker because it shows a 2-D sequential pattern synchronized with the video frames. The search is based on motion vectors, the same as those used in video compression. Experiments using prototype markers show that the proposed method is practical. Although the method can be applied to a video stream independently, the total computation cost can be decreased if the motion vector analysis of video compression and the proposed method are unified.

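The abstract leaves the matching step unspecified. As a purely illustrative sketch (the function and the code format are assumptions, not the paper's method), detecting such a marker at a tracked image location can be reduced to matching its known temporal on/off code, at any phase of its repeating cycle, against the per-frame samples observed there:

```python
def matches_marker(observed, marker_code):
    """Check whether the per-frame on/off sequence observed at a tracked
    image location contains the marker's temporal code, allowing the
    observation to begin at any phase of the repeating code."""
    n = len(marker_code)
    if len(observed) < n:
        return False
    doubled = marker_code * 2  # every cyclic rotation is a slice of this
    return any(observed[i:i + n] == doubled[s:s + n]
               for i in range(len(observed) - n + 1)
               for s in range(n))

# A marker blinking the 4-frame code [1, 0, 1, 1], observed mid-cycle.
found = matches_marker([0, 1, 1, 1, 0, 1, 1], [1, 0, 1, 1])
```

In practice the candidate location would be tracked across frames with the motion vectors the abstract mentions before the temporal code is checked.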

Adaptive Temporal Rate Control of Video Objects for Scalable Transmission

  • Chang, Hee-Dong;Lim, Young-Kwon;Lee, Myoung-Ho;Ahn, Chieteuk
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 1997.06a
    • /
    • pp.43-48
    • /
    • 1997
  • Video transmission for real-time viewing over the Internet is a core operation for multimedia services. Its realization is difficult, however, because the Internet has two major problems: very narrow endpoint bandwidth and network jitter. We previously proposed a scalable video transmission method in [8], which used MPEG-4 video VM (Verification Model) 2.0 [3] for very low bit rate coding and an adaptive temporal rate control of video objects to overcome the network jitter problem. In this paper, we present an improved adaptive temporal rate control scheme for scalable transmission. Experimental results for three test video sequences show that the adaptive temporal rate control can transfer the video bitstream at the source frame rate under variable network conditions.

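The abstract does not detail the rate-control rule. A minimal leaky-bucket sketch of temporal rate control (hypothetical, not the authors' scheme; the function and its parameters are invented for illustration) drops frames whenever transmitting them would overflow what the channel can carry:

```python
def temporal_rate_control(frame_bits, channel_bps, fps, max_buffer_bits):
    """Decide, frame by frame, whether to transmit or drop each frame so
    that a leaky-bucket transmit buffer never overflows.  frame_bits is
    the encoded size of each source frame; the channel drains
    channel_bps / fps bits per frame interval.  Returns the indices of
    the transmitted frames."""
    drain = channel_bps / fps              # bits drained per frame slot
    buffer = 0.0
    kept = []
    for i, bits in enumerate(frame_bits):
        if buffer + bits <= max_buffer_bits:   # room left: transmit
            buffer += bits
            kept.append(i)
        # else: drop the frame (temporal scaling); buffer is unchanged
        buffer = max(0.0, buffer - drain)      # channel drains each slot
    return kept

# 5 frames of 10 kbit over a 40 kbit/s channel at 10 fps (4 kbit per slot).
sent = temporal_rate_control([10000] * 5, 40000, 10, 20000)
```

Dropping whole frames rather than degrading them is what makes the control "temporal": the surviving frames keep their quality while the effective frame rate adapts to the channel.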

Neural Network based Video Coding in JVET

  • Choi, Kiho
    • Journal of Broadcast Engineering
    • /
    • v.27 no.7
    • /
    • pp.1021-1033
    • /
    • 2022
  • After the Versatile Video Coding (VVC)/H.266 standard was completed, the Joint Video Experts Team (JVET) began to investigate new technologies that could significantly increase coding gain for the next-generation video coding standard. One direction is to investigate signal-processing-based tools; the other is to investigate neural-network-based technology. Neural Network based Video Coding (NNVC) had not been studied previously, and this is the first trial of such an approach in the standards group. After two years of research, JVET produced its first common software, called Neural Compression Software (NCS), with two NN-based in-loop filtering tools at the 27th meeting, and began to maintain NN-based technologies for common experiments. The coding gains of the two filters in NCS-1.0 are 8.71% and 9.44% on average, respectively, in a random access scenario. All material related to NCS can be found in the JVET repository. In this paper, we provide a brief overview and review of the NNVC activity in JVET in order to provide trends in, and insight into, the new direction of video coding standards.

Video based Point Cloud Compression with Versatile Video Coding (Versatile Video Coding을 활용한 Video based Point Cloud Compression 방법)

  • Gwon, Daeheyok;Han, Heeji;Choi, Haechul
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2020.07a
    • /
    • pp.497-499
    • /
    • 2020
  • A point cloud is a way of representing 3D data using a large number of 3D points, and with advances in multimedia acquisition and processing technology it is drawing attention in a variety of fields. In particular, a point cloud has the advantage that 3D data can be collected and represented precisely. However, point clouds carry an enormous amount of data, so efficient compression is essential. Accordingly, the Moving Picture Experts Group, an international standardization body, is establishing the Video based Point Cloud Compression (V-PCC) and Geometry based Point Cloud Coding standards for the efficient compression of point cloud data. Of the two, V-PCC has the advantage of high applicability because it compresses point clouds using the existing High Efficiency Video Coding (HEVC) standard. In this paper, we show that the compression performance of V-PCC can be further improved by replacing the HEVC codec used in V-PCC with Versatile Video Coding, whose standardization was scheduled to be completed in July 2020.


Objective Video Quality Assessment for Stereoscopic Video (스테레오 비디오의 객관적 화질평가 모델 연구)

  • Seo, Jung-Dong;Kim, Dong-Hyun;Sohn, Kwang-Hoon
    • Journal of Broadcast Engineering
    • /
    • v.14 no.2
    • /
    • pp.197-209
    • /
    • 2009
Stereoscopic video delivers depth perception to users, unlike 2D video, so a new video quality assessment model is needed for it. In this paper, we propose a new method for the objective assessment of stereoscopic video. The proposed method detects blocking artifacts and degradation in edge regions, as in conventional video quality assessment models, and it detects quality differences between views using depth information for efficient quality prediction. We performed a subjective assessment of stereoscopic video to check the performance of the proposed method, and we confirmed that the proposed algorithm is superior to PSNR, the existing method, with respect to correlation with the subjective assessment results.
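As a hedged illustration of the blocking-artifact detection the abstract mentions (the metric below is a generic sketch, not the proposed model; the function name and normalization are assumptions), blockiness can be scored by comparing luma jumps at block boundaries against jumps between ordinary neighbouring pixels:

```python
def blockiness(frame, block=8):
    """Crude blocking-artifact score: the mean absolute luma jump across
    vertical block boundaries, divided by the mean jump between ordinary
    neighbouring columns.  frame is a 2-D list of luma values.  Scores
    well above 1.0 suggest visible block edges."""
    h, w = len(frame), len(frame[0])
    boundary, inner = [], []
    for r in range(h):
        for c in range(1, w):
            d = abs(frame[r][c] - frame[r][c - 1])
            (boundary if c % block == 0 else inner).append(d)
    return (sum(boundary) / len(boundary)) / max(sum(inner) / len(inner), 1e-9)

# A frame with a sharp luma step exactly on an 8-pixel block boundary
# scores far above 1.0; a smooth gradient scores about 1.0.
score = blockiness([[0] * 8 + [60] * 8 for _ in range(4)], block=8)
```

A full model such as the one the paper proposes would combine a term like this with edge-degradation and inter-view difference terms.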

Fake News Detection on Social Media using Video Information: Focused on YouTube (영상정보를 활용한 소셜 미디어상에서의 가짜 뉴스 탐지: 유튜브를 중심으로)

  • Chang, Yoon Ho;Choi, Byoung Gu
    • The Journal of Information Systems
    • /
    • v.32 no.2
    • /
    • pp.87-108
    • /
    • 2023
  • Purpose The main purpose of this study is to improve fake news detection performance by using video information, overcoming the limitations of extant text- and image-oriented studies that do not reflect the latest news consumption trend. Design/methodology/approach This study collected video clips and related information, including news scripts, speakers' facial expressions, and video metadata, from YouTube to develop a fake news detection model. Based on the collected data, seven combinations of the related information (e.g. scripts, video metadata, facial expression, scripts and video metadata, scripts and facial expression, and scripts, video metadata, and facial expression) were used as input for training and evaluation. The input data were analyzed using six models, including support vector machine and deep neural network. The area under the curve (AUC) was used to evaluate the performance of the classification models. Findings The results showed that the AUC and accuracy values of the three-feature combination (scripts, video metadata, and facial expression) were the highest in the logistic regression, naïve Bayes, and deep neural network models. This result implies that fake news detection can be improved by using video information (video metadata and facial expression). The sample size of this study was relatively small; the generalizability of the results would be enhanced with a larger sample.
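The AUC metric the study uses is well defined independently of the models. As a small self-contained sketch (the scores below are invented for illustration and do not come from the study), it can be computed via the Mann-Whitney formulation and used to compare two input combinations:

```python
def auc(labels, scores):
    """Area under the ROC curve, computed as the probability that a
    randomly chosen positive example is scored above a randomly chosen
    negative one, with ties counting half (the Mann-Whitney formulation)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Comparing two hypothetical feature combinations on the same labels
# (1 = fake, 0 = genuine); higher AUC means better ranking of fakes.
labels = [1, 1, 0, 0]
auc_scripts_only = auc(labels, [0.9, 0.4, 0.6, 0.2])
auc_all_features = auc(labels, [0.9, 0.7, 0.3, 0.2])
```

Unlike accuracy, this measure does not depend on a classification threshold, which is why studies comparing feature combinations across several model families often report it.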

The Characteristics and Future Trends of Short-Form Animation (숏폼 애니메이션의 특성과 발전방향에 관한 연구)

  • Lee, Sun-Ju;Han, Je-Sung
    • Cartoon and Animation Studies
    • /
    • s.38
    • /
    • pp.29-51
    • /
    • 2015
  • With the progress of high-speed Internet networks, mobile devices, and social networking, the media ecosystem has shifted away from one in which content flowed one way from producer to consumer. A so-called 'prosumer' culture has taken root, in which consumers themselves produce media content. Along with these trends, video-sharing platforms such as YouTube allocate advertisement profit to content producers, offering a win-win platform for content prosumers. This allows channels to attract tens of millions of subscribers and earn annual incomes of over 10 billion won, marking a revolutionary change in the content industry. This paper analyzes video distribution channels and short-form media content that are showing continuous growth, in order to identify new markets where animated content can make progress in an era of online video media platforms, and to provide a direction in which small teams of animation creators can survive and thrive in this environment.

A Study on the Interest of SNS Users according to New Media Fashion Content Types -Focus on Vogue Korea's Official Instagram- (뉴미디어 패션 콘텐츠 유형에 따른 사용자의 SNS 관심도 연구 -보그 코리아 공식 인스타그램 중심으로-)

  • Lee, Chungsun;Lee, Seunghee
    • Journal of Fashion Business
    • /
    • v.24 no.1
    • /
    • pp.75-87
    • /
    • 2020
  • The purpose of this study is to find trends in new media fashion content by analyzing the fashion content of the official Instagram accounts of domestic fashion magazines being transformed by digital media. The framework for the analysis of fashion content types and production methods is based on one used in an earlier research project. An empirical analysis was conducted on Vogue Korea's official Instagram account, using the number of views as the measure of interest. After screening posts on the Vogue Korea account over four months for fashion content, 291 short video postings were extracted and their view counts analyzed. By content type, the posts were categorized as 'star', 'show/exhibition', 'product', 'shop', 'fashion film', 'designer', or 'event', and the number of postings per type and the number of views per post were recorded. By creator and editing characteristics, the posts were classified as 'professional production highlight', 'professional production private', 'UCC', or 'GIF' videos, and the views per post were likewise collected. The results show different levels of interest depending on both the type of fashion content and the way the videos were produced, and the study also investigated how the combination of these two factors affects interest. When producing new media fashion content, combining a 'star' type post with 'professional production private' video content was most popular; the selection of production method is therefore important even for the same type of content.

A Case Study on 'Visual Affordance' of Short Form Video in Smart Media (스마트미디어 초단편 영상의 '시각 유도성' 사례 연구)

  • Kim, Hyunsook;Moon, Jaecheol
    • The Journal of the Korea Contents Association
    • /
    • v.19 no.7
    • /
    • pp.130-137
    • /
    • 2019
  • With the advent of smart media, short and fast-paced video content appeared, and viewing conditions changed to short sessions in environments where perception is dispersed by distracting and complicated external situations. Accordingly, smart media videos must deliver meaning quickly while holding the eyes of viewers who lack patience. Our eyes and brain have a hard time taking in image information that flows within the constraints of a small screen: visual perception is limited in the amount of visual information it can accept, and is especially weak at cognition in moving images, so smart media video production should be directed in a way that enhances perceptual understanding. To communicate effectively while reducing the visual burden, intuitive image comprehension is needed, achieved by applying intuitiveness and behavioral induction to moving images. Close-up shots, stable compositions such as frontality and the three-division (rule-of-thirds) structure, and color carry such 'visual affordance'; therefore, these devices should be used appropriately.