• Title/Summary/Keyword: video information characteristics

Search Results: 586

An Exploratory Study on Video Information Literacy (영상정보 활용능력에 관한 탐색적 연구)

  • Min Kyung Na;Jee Yeon Lee
    • Journal of the Korean Society for Information Management / v.41 no.2 / pp.19-46 / 2024
  • In this study, we conducted a literature review and exploratory research to identify the characteristics of recently popular video information and to propose the basic capabilities required for video information literacy. The literature review examined the distinct characteristics of video information from various perspectives, differentiating it from other types of information. We then conducted one-on-one, in-depth, semi-structured interviews with 16 participants ranging in age from their teens to their 50s to collect their video usage experiences. The interview contents were categorized into a codebook and subjected to content analysis. Combining the literature review and the interview analysis, we derived the characteristics of video information and classified them into properties of video itself and characteristics of video information usage. Based on these characteristics, the study proposes the basic capabilities required for video information literacy.

A Study on Feature Information Parsing System of Video Image for Multimedia Service (멀티미디어 서비스를 위한 동영상 이미지의 특징정보 분석 시스템에 관한 연구)

  • 이창수;지정규
    • Journal of Information Technology Applications and Management / v.9 no.3 / pp.1-12 / 2002
  • Due to rapid development in computer and communication technologies, video is now used more widely than ever in many areas. Current information analysis systems were originally built to process text-based data, so they struggle to represent the ambiguity of video correctly, to process large amounts of annotation, and to maintain the objectivity such tasks require. We propose an algorithm capable of analyzing a large amount of video efficiently. Each frame is segmented into regions using region growing and region merging techniques. To sample color, we convert from RGB to HSI and keep the information that matches the representative colors. To sample shape information, we use improved moment invariants (IMI), which resolve several problems of histogram intersection found in existing IMI and Jain's method. The sampled feature information of the streaming media is then used to find similar frames.
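The RGB-to-HSI conversion step mentioned in the abstract can be sketched with the standard geometric formula (a minimal pure-Python version; the paper's exact implementation is not given):

```python
import math

def rgb_to_hsi(r, g, b):
    """Convert RGB (0-255) to HSI: hue in degrees, saturation and
    intensity in [0, 1]."""
    r, g, b = r / 255.0, g / 255.0, b / 255.0
    intensity = (r + g + b) / 3.0
    minimum = min(r, g, b)
    saturation = 0.0 if intensity == 0 else 1.0 - minimum / intensity
    # Hue from the standard geometric derivation; undefined for grays,
    # where we return 0 by convention.
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    if den == 0:
        hue = 0.0
    else:
        theta = math.degrees(math.acos(max(-1.0, min(1.0, num / den))))
        hue = theta if b <= g else 360.0 - theta
    return hue, saturation, intensity
```

For example, pure red maps to hue 0° with full saturation and intensity 1/3, while any gray has saturation 0.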

Adaptive Multiple TCP-connection Scheme to Improve Video Quality over Wireless Networks

  • Kim, Dongchil;Chung, Kwangsue
    • KSII Transactions on Internet and Information Systems (TIIS) / v.8 no.11 / pp.4068-4086 / 2014
  • Due to the prevalence of powerful mobile terminals and rapid advancements in wireless communication technologies, wireless video streaming services have become increasingly popular. Recent studies show that video streaming over the Transmission Control Protocol (TCP) is becoming more practical, since TCP offers advantages over the User Datagram Protocol (UDP) such as firewall traversal, bandwidth fairness, and reliability. However, because of TCP's inherent fair-sharing behavior, each video service receives an equal portion of the limited bandwidth, and this fair sharing cannot always guarantee the video quality for each user. To solve this problem, this paper proposes an Adaptive Multiple TCP (AM-TCP) scheme to guarantee video quality for mobile devices in wireless networks. AM-TCP adaptively controls the number of TCP connections according to the Rate Distortion (RD) characteristics of each stream and the network status. The proposed scheme minimizes the total distortion across all participating video streams and maximizes service quality by guaranteeing the quality of each streaming session. Simulation results show that the proposed scheme significantly improves video streaming quality in wireless networks.
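The core AM-TCP idea, opening extra TCP connections when a stream's target bitrate exceeds its single-connection fair share, can be sketched roughly as below. This is a hypothetical simplification: the paper's actual controller also weighs per-stream RD characteristics, which are omitted here.

```python
import math

def connection_count(target_kbps, fair_share_kbps, max_conns=4):
    """Return how many parallel TCP connections a stream should open,
    assuming each connection claims roughly one fair share of the
    bottleneck bandwidth. Capped at max_conns to limit unfairness."""
    if fair_share_kbps <= 0:
        raise ValueError("fair share must be positive")
    return max(1, min(max_conns, math.ceil(target_kbps / fair_share_kbps)))
```

A 2.5 Mbps stream on a link whose fair share is 1 Mbps would open three connections, while a stream already below its fair share keeps one.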

Implementation of Effective Automatic Foreground Motion Detection Using Color Information

  • Kim, Hyung-Hoon;Cho, Jeong-Ran
    • Journal of the Korea Society of Computer and Information / v.22 no.6 / pp.131-140 / 2017
  • As video equipment such as CCTV is used for various purposes across society, digital video processing technology such as automatic motion detection has become essential. In this paper, we propose and implement a more stable and accurate motion detection system based on the background subtraction technique. We improve the accuracy and stability of motion detection over existing methods by processing the color information of digital image data efficiently: the color information is separated into its brightness and chromatic components, each component is processed with its own characteristics taken into account, and the results are then merged. This yields more useful color information for motion detection than existing methods provide. We further improve the detection success rate with a background update process that analyzes the characteristics of moving backgrounds in natural environments and reflects them in the background image.
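The background subtraction step with a selective background update can be sketched as follows, here on a single grayscale channel for brevity (the paper processes brightness and chromatic components separately; this is a minimal stand-in, not the authors' implementation):

```python
def detect_motion(frame, background, alpha=0.05, threshold=25):
    """Background subtraction on a grayscale frame given as a flat list
    of pixel intensities (0-255). Pixels whose absolute difference from
    the background model exceeds the threshold are foreground; the
    background is updated with a running average only where no motion
    was detected, echoing the selective background update idea."""
    mask, new_bg = [], []
    for pix, bg in zip(frame, background):
        moving = abs(pix - bg) > threshold
        mask.append(moving)
        # Freeze the model under moving pixels so foreground objects
        # are not absorbed into the background.
        new_bg.append(bg if moving else (1 - alpha) * bg + alpha * pix)
    return mask, new_bg
```

Updating only the non-moving pixels is what lets slowly varying natural backgrounds (swaying trees, lighting drift) be absorbed while a passing object stays foreground.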

Design and Implementation of MPEG-2 Compressed Video Information Management System (MPEG-2 압축 동영상 정보 관리 시스템의 설계 및 구현)

  • Heo, Jin-Yong;Kim, In-Hong;Bae, Jong-Min;Kang, Hyun-Syug
    • The Transactions of the Korea Information Processing Society / v.5 no.6 / pp.1431-1440 / 1998
  • Video data are stored and retrieved in various compressed forms according to their characteristics. In this paper, we present a generic data model that captures the structure of a video document and provides a means for indexing a video stream. Using this model, we design and implement CVIMS (the MPEG-2 Compressed Video Information Management System) to store and retrieve video documents. CVIMS extracts I-frames from MPEG-2 files, selects key frames from the I-frames, and stores in a database index information such as thumbnails, captions, and picture descriptors of the key frames. CVIMS then retrieves MPEG-2 video data using the thumbnails of key frames and various types of queries.
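Selecting key frames from the extracted I-frames could look like the sketch below, which keeps an I-frame whenever it differs enough from the previous key frame. The L1 histogram distance used here is a common stand-in; the abstract does not specify the paper's actual selection criterion.

```python
def select_key_frames(i_frames, threshold=0.3):
    """Pick key-frame indices from a sequence of I-frames, each
    represented as a normalized histogram (list summing to 1).
    A frame becomes a key frame when its L1 distance from the
    previous key frame exceeds the threshold."""
    if not i_frames:
        return []
    keys = [0]  # the first I-frame always anchors the index
    for idx in range(1, len(i_frames)):
        prev = i_frames[keys[-1]]
        dist = sum(abs(a - b) for a, b in zip(i_frames[idx], prev))
        if dist > threshold:
            keys.append(idx)
    return keys
```

Consecutive near-duplicate I-frames collapse into a single key frame, which keeps the thumbnail index compact.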

On Rate-adaptive LDPC-based Cross-layer SVC over Bursty Wireless Channels

  • Cho, Yongju;Cha, Jihun;Radha, Hayder;Seo, Kwang-Deok
    • KSII Transactions on Internet and Information Systems (TIIS) / v.6 no.9 / pp.2266-2284 / 2012
  • Recent studies have indicated that a significant improvement in wireless video throughput can be achieved by Cross-Layer Design with Side-information (CLDS) protocols. In this paper, we derive the operational rate of a CLDS protocol operating over a realistic wireless channel. We then deduce an empirical Rate-Distortion (R-D) model for above-capacity Scalable Video Coding (SVC) to estimate the loss of video quality incurred under inaccurate rate estimation. Finally, we develop a novel Unequal Error Protection (UEP) scheme that leverages the characteristics of LDPC codes to reduce video quality distortion under the bursty errors typically observed on wireless links. The efficacy of the proposed rate adaptation architecture over conventional protocols is demonstrated by realistic video simulations using actual IEEE 802.11b wireless traces.
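The unequal-error-protection idea, spending more of a fixed parity budget on the layers whose loss hurts most, can be sketched as a simple proportional allocation (a hypothetical illustration; the paper allocates actual LDPC code rates, not abstract parity counts):

```python
def allocate_parity(importance, total_parity):
    """Split a parity budget across SVC layers in proportion to their
    importance weights (base layer highest), using largest-remainder
    rounding so the shares sum exactly to the budget."""
    total_w = sum(importance)
    raw = [w * total_parity / total_w for w in importance]
    shares = [int(x) for x in raw]  # floor of each proportional share
    # Hand out the leftover units to the largest fractional remainders.
    order = sorted(range(len(raw)), key=lambda i: raw[i] - shares[i],
                   reverse=True)
    for i in order[: total_parity - sum(shares)]:
        shares[i] += 1
    return shares
```

With weights 3:2:1 for base, mid, and enhancement layers and 12 parity symbols, the base layer receives half the budget.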

CNN-based Visual/Auditory Feature Fusion Method with Frame Selection for Classifying Video Events

  • Choe, Giseok;Lee, Seungbin;Nang, Jongho
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.3 / pp.1689-1701 / 2019
  • In recent years, personal videos have been widely shared online due to the popularity of portable devices such as smartphones and action cameras; one recent report predicted that 80% of Internet traffic would be video content by 2021. Several studies have addressed the detection of main video events in order to manage large video collections, and they show fairly good performance in certain genres. However, previous methods have difficulty detecting events in personal video, because the characteristics and genres of personal videos vary widely. In our research, we found that adding a dataset with the right perspective improved performance, and that performance also depends on how key frames are extracted from the video. We therefore select frame segments that can represent the video, taking the characteristics of personal video into account. From each frame segment, object, location, food, and audio features are extracted, and representative vectors are generated through a CNN-based recurrent model and a fusion module. The proposed method achieved 78.4% mAP in experiments on the LSVC dataset.
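The final fusion into one representative vector can be illustrated with simple mean pooling over per-segment feature vectors. This is only a stand-in for the paper's CNN-based recurrent fusion module, which the abstract does not detail:

```python
def fuse_features(segment_features):
    """Mean-pool per-segment feature vectors (e.g. concatenated visual
    and audio features) into one representative vector for the whole
    video. All segments are assumed to share the same dimensionality."""
    n = len(segment_features)
    dim = len(segment_features[0])
    return [sum(seg[i] for seg in segment_features) / n for i in range(dim)]
```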

Deep Learning based Loss Recovery Mechanism for Video Streaming over Mobile Information-Centric Network

  • Han, Longzhe;Maksymyuk, Taras;Bao, Xuecai;Zhao, Jia;Liu, Yan
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.9 / pp.4572-4586 / 2019
  • Mobile Edge Computing (MEC) and Information-Centric Networking (ICN) are essential network architectures for the future Internet. Their advantages, such as computation and storage capabilities at the network edge, in-network caching, and the named-data communication paradigm, can greatly improve the quality of video streaming applications. However, packet loss in wireless environments still degrades streaming performance, and existing loss recovery approaches in ICN do not exploit the capabilities of MEC. This paper proposes a Deep Learning based Loss Recovery Mechanism (DL-LRM) for video streaming over MEC-based ICN. Unlike existing approaches, the Forward Error Correction (FEC) packets are generated at the edge of the network, which dramatically reduces the workload of the core network and backhaul. By monitoring network states, DL-LRM controls the FEC request rate with a deep reinforcement learning algorithm. Considering the characteristics of video streaming and MEC, we also develop content caching detection and fast retransmission algorithms to utilize MEC resources effectively. Experimental results demonstrate that DL-LRM adaptively adjusts the FEC request rate and achieves better video quality than existing approaches.
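The simplest FEC an edge node could generate for a block of packets is a single XOR parity packet, which recovers any one lost packet in the block. The sketch below illustrates that mechanism only; the paper's actual FEC scheme and its deep-RL rate controller are not specified in the abstract:

```python
def xor_fec(packets):
    """Generate one XOR parity packet over a block of equal-length
    byte strings; it can repair a single loss within the block."""
    parity = bytearray(len(packets[0]))
    for pkt in packets:
        for i, b in enumerate(pkt):
            parity[i] ^= b
    return bytes(parity)

def recover(received, parity):
    """Reconstruct the single missing packet (marked None) by XORing
    the parity with every packet that did arrive."""
    missing = bytearray(parity)
    for pkt in received:
        if pkt is None:
            continue
        for i, b in enumerate(pkt):
            missing[i] ^= b
    return bytes(missing)
```

Raising the FEC request rate in this picture simply means asking the edge for parity over smaller blocks, trading bandwidth for loss resilience.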

Cross-layered Video Information Sharing Method and Selective Retransmission Technique for The Efficient Video Streaming Services (효율적인 영상 스트리밍 서비스를 위한 Cross-layer 영상 정보 공유 방법 및 선택적 재전송 기법)

  • Chung, Taewook;Chung, Chulho;Kim, Jaeseok
    • Journal of Korea Multimedia Society / v.18 no.7 / pp.853-863 / 2015
  • In this paper, we propose a cross-layered design of the video codec and the communication system for efficient video streaming services. Conventional video streaming is served by a divided system consisting of a video codec layer and a communication layer, and this separation limits the performance of the streaming service. With a cross-layered design, the layers can share information and the service can improve its performance. On top of this cross-layered system, which exposes information about the encoded video data, we also propose a selective retransmission method for the communication system: by considering the characteristics of the video data, selective retransmission improves the performance of the streaming service. We verified the proposed method with raw-format full HD test sequences, the H.264/AVC codec, and MATLAB simulation. The simulation results show that the proposed method improves PSNR performance by about 10%.
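One plausible form of selective retransmission using codec-layer information is to resend reference frames first when the retransmission budget is tight, since their loss propagates to dependent frames. This sketch is an assumption about the mechanism; the abstract does not state the authors' exact selection rule:

```python
def select_retransmissions(lost_packets, budget):
    """Given lost packets annotated with the frame type they carry,
    retransmit reference frames first (I before P, P before B) within
    a limited budget. Ties keep original arrival order (stable sort)."""
    priority = {"I": 0, "P": 1, "B": 2}
    ordered = sorted(lost_packets, key=lambda p: priority[p["frame_type"]])
    return ordered[:budget]
```

With a budget of two, a lost I-frame and P-frame packet are resent while a lost B-frame packet is simply concealed at the decoder.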

Quality Metric with Video Characteristics on Scalable Video Coding (영상 특성을 고려한 스케일러블 비디오 기반 품질 메트릭)

  • Yoo, Ha-Na;Kim, Cheon-Seog;Lee, Ho-Jun;Jin, Sung-Ho;Ro, Yong-Man
    • Journal of Broadcast Engineering / v.13 no.2 / pp.179-187 / 2008
  • In this paper, we propose a quality metric based on SVC and subjective quality. The proposed metric is general purpose: it can be used for any video sequence regardless of its temporal and spatial characteristics. Quality of Service (QoS) is an important issue in heterogeneous environments with diverse restrictions such as limited network bandwidth and limited display resolution. Scalable Video Coding (SVC) is an efficient coding technique for such environments, because a single fully scalable bitstream can be adapted into bitstreams of various qualities using three scalabilities (spatial, temporal, and SNR). To maximize QoS in this setting, we must consider subjective quality, i.e., the viewer's response; and since subjective quality is affected by the temporal and spatial characteristics of the video sequence, those characteristics must be considered as well. To verify the efficiency of the proposed method, we performed subjective assessments. The experimental results show that the proposed metric correlates highly with subjective quality and can serve as a decision tool for SVC bitstream extraction.
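A content-aware quality metric in the spirit of the abstract might combine a spatial term with a motion-sensitive temporal term, as in the hypothetical sketch below. The functional form, the 20-50 dB normalization range, and the `motion_weight` parameter are all illustrative assumptions, not the paper's model:

```python
def quality_score(frame_psnr, frame_rate, full_rate, motion_weight=0.5):
    """Combine spatial quality (mean per-frame PSNR, normalized to
    [0, 1] over a 20-50 dB range) with a temporal term that penalizes
    frame-rate reduction more strongly for high-motion content
    (larger motion_weight). Returns a score in [0, 1]."""
    spatial = max(0.0, min(1.0, (sum(frame_psnr) / len(frame_psnr) - 20) / 30))
    temporal = (frame_rate / full_rate) ** motion_weight
    return spatial * temporal
```

Such a score lets an SVC extractor compare candidate bitstreams (e.g. full resolution at half frame rate versus lower SNR at full frame rate) on a single axis tuned to the sequence's motion characteristics.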