• Title/Summary/Keyword: Video Information


A Unified Framework of Information Needs and Perceived Barriers in Interactive Video Retrieval

  • Albertson, Dan
    • Journal of Information Science Theory and Practice
    • /
    • v.4 no.4
    • /
    • pp.4-15
    • /
    • 2016
  • Information needs of users have been examined both generally and as they pertain to particular types and formats of information. Barriers to information have also been investigated, including those which are situational and those across certain domains and socioeconomic contexts. Unified studies concerning both information needs and barriers are needed: both are likely present in any given interactive search situation, and users' attempts to satisfy their own individualized information needs will likely encounter barriers of some sort. The present study employed a survey method to collect users' perceptions of video information needs and barriers as part of recent video search situations. Findings from this analysis establish a unified framework, based on the themes emerging directly from users' responses, and demonstrate its suitability for informing future designs and evaluations of user-centered interactive retrieval tools.

Realtime Video Visualization based on 3D GIS (3차원 GIS 기반 실시간 비디오 시각화 기술)

  • Yoon, Chang-Rak;Kim, Hak-Cheol;Kim, Kyung-Ok;Hwang, Chi-Jung
    • Journal of Korea Spatial Information System Society
    • /
    • v.11 no.1
    • /
    • pp.63-70
    • /
    • 2009
  • 3D GIS (Geographic Information System) processes, analyzes, and presents various real-world 3D phenomena by building 3D spatial information of real-world terrain, facilities, etc., and combining it with visualization techniques such as VR (Virtual Reality). It can be applied to areas such as urban management, traffic information, environment management, disaster management, and ocean management systems. In this paper, we propose video visualization technology based on 3D geographic information to effectively provide real-time information in a 3D geographic information system, and also present methods for establishing 3D building information data. The proposed video visualization system can provide real-time video information based on 3D geographic information by projecting a real-time video stream from a network video camera onto 3D geographic objects and texture-mapping the video frames onto terrain, facilities, etc. We also developed a semi-automatic DBM (Digital Building Model) construction technique using both aerial imagery and LiDAR data for 3D projective texture mapping. 3D geographic information systems currently provide static visualization information, and the proposed method can replace this static information with real video information. The proposed method can be used in location-based decision-making systems by providing real-time visualization information; moreover, it can be used to provide intelligent context-aware services based on geographic information.

  • PDF
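The projection step at the heart of the paper above (draping a live video frame onto 3D geometry) can be sketched as follows. This is a minimal pinhole-camera illustration, not the paper's implementation; the function and matrix names are assumptions for the sketch.

```python
import numpy as np

def project_to_texture(point_3d, proj_matrix):
    """Projective texture mapping sketch: map a world-space vertex into the
    video frame's (u, v) coordinates using the camera's 3x4 projection
    matrix, as when texture-mapping live video onto terrain or buildings."""
    p = proj_matrix @ np.append(np.asarray(point_3d, dtype=float), 1.0)
    return p[:2] / p[2]  # perspective divide
```

In a real system each vertex of the terrain or building mesh would be mapped this way per frame, and vertices falling outside the frame or behind the camera would be clipped.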

A study on performance evaluation of DVCs with different coding method and feasibility of spatial scalable DVC (분산 동영상 코딩의 코딩 방식에 따른 성능 평가와 공간 계층화 코더로서의 가능성에 대한 연구)

  • Kim, Dae-Yeon;Park, Gwang-Hoon;Kim, Kyu-Heon;Suh, Doug-Young
    • Journal of Broadcast Engineering
    • /
    • v.12 no.6
    • /
    • pp.585-595
    • /
    • 2007
  • Distributed video coding is a new video coding paradigm based on the Slepian-Wolf and Wyner-Ziv information theory. Because its decoder exploits side information, distributed video coding transfers the computational burden from encoder to decoder, so that encoding with light computational power can be realized. Its RD performance is superior to that of standard video coding without a motion compensation process, but still has a gap with that of coding with motion compensation. This paper introduces the basic theory and structure of distributed video coding, and then shows the RD performances of DVCs whose coding styles differ from each other, and of a DVC used as a spatial scalable video coder.

Hash Based Equality Analysis of Video Files with Steganography of Identifier Information

  • Lee, Wan Yeon;Choi, Yun-Seok
    • Journal of the Korea Society of Computer and Information
    • /
    • v.27 no.7
    • /
    • pp.17-25
    • /
    • 2022
  • Hash functions are widely used for fast equality analysis of video files because of their fixed small output sizes regardless of their input sizes. However, a hash function carries the possibility of a hash collision, in which different inputs derive the same output value, so different video files may be mistaken for the same file. In this paper, we propose an equality analysis scheme in which different video files always derive different output values, using identifier information and a double hash. The scheme first extracts the identifier information of an original video file and attaches it to the end of the original file with a steganography method. Next, the scheme calculates two hash output values: one for the original file and one for the extended file with the attached identifier information. Finally, the scheme uses the identifier information, the hash output value of the original file, and the hash output value of the extended file for the equality analysis of video files. For evaluation, we implement the proposed scheme as a practical software tool and show that it performs equality analysis of video files correctly, without the hash collision problem, and increases resistance against malicious hash collision attacks.
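The double-hash comparison above can be sketched compactly. This sketch assumes SHA-256 as the hash function and treats the identifier as an arbitrary byte string; the paper's actual identifier extraction and steganographic embedding are simplified to a plain byte append.

```python
import hashlib

def double_hash_fingerprint(data: bytes, identifier: bytes) -> tuple:
    """Fingerprint = (identifier, hash of original file,
    hash of extended file = original + appended identifier)."""
    h_orig = hashlib.sha256(data).hexdigest()
    h_ext = hashlib.sha256(data + identifier).hexdigest()
    return (identifier, h_orig, h_ext)

def same_video(fp_a: tuple, fp_b: tuple) -> bool:
    """Files are judged equal only if identifier and both hashes all match,
    so a single-hash collision alone cannot cause a false match."""
    return fp_a == fp_b
```

The point of the construction is that an attacker who engineers a collision on one hash would also have to collide the second hash of the identifier-extended file, which is a far harder target.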

Face Detection and Matching for Video Indexing (비디오 인덱싱을 위한 얼굴 검출 및 매칭)

  • Islam Mohammad Khairul;Lee Sun-Tak;Yun Jae-Yoong;Baek Joong-Hwan
    • Proceedings of the Korea Institute of Convergence Signal Processing
    • /
    • 2006.06a
    • /
    • pp.45-48
    • /
    • 2006
  • This paper presents an approach to visual-information-based temporal indexing of video sequences. The objective of this work is the integration of automatic face detection and a matching system for video indexing. Face detection is done using color information. The matching stage is based on Principal Component Analysis (PCA) followed by the Minimax Probability Machine (MPM). Using PCA, one feature vector is calculated for each face detected at the previous stage from the video sequence, and MPM is applied to these feature vectors for matching against the training faces, which are manually indexed after extraction from video sequences. The integration of the two stages gives good results: a rate of 86.3% correctly classified frames shows the efficiency of our system.

  • PDF
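The PCA feature-extraction step described above can be sketched with a plain SVD. This is an eigenfaces-style illustration only: the matching stage here uses nearest-neighbour distance as a stand-in for the paper's MPM classifier, and all names are assumptions.

```python
import numpy as np

def pca_features(faces: np.ndarray, n_components: int = 4):
    """Project flattened face images onto the top principal components
    (eigenfaces). Returns the projected features, the mean face, and the
    component basis needed to project new queries."""
    mean = faces.mean(axis=0)
    centered = faces - mean
    # Rows of vt are the principal axes of the centered data
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_components]
    return centered @ basis.T, mean, basis

def match_face(query: np.ndarray, train_feats: np.ndarray, mean, basis) -> int:
    """Return the index of the closest indexed face. Nearest neighbour in
    PCA space stands in for the MPM classifier used in the paper."""
    q = (query - mean) @ basis.T
    dists = np.linalg.norm(train_feats - q, axis=1)
    return int(np.argmin(dists))
```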

Design of video ontology for semantic web service (시맨틱 웹 서비스를 위한 동영상 온톨로지 설계)

  • Lee, Young-seok;Youn, Sung-dae
    • Proceedings of the Korean Institute of Information and Commucation Sciences Conference
    • /
    • 2009.05a
    • /
    • pp.195-198
    • /
    • 2009
  • Recently, research on building up the semantic web for exchanging information and knowledge has been active. To make use of video contents as knowledge on the semantic web, semantic-based retrieval must come first. At present, retrieval based on matching between metadata and keywords is commonly used. In this paper, we propose an ontology design that enlarges user participation and adds usefulness values and history information. By exploiting collective intelligence, this will facilitate semantic retrieval as well as the use of video contents. The proposed ontology schema will allow semantic-based retrieval of video contents on the semantic web to achieve higher recall than current retrieval methods; moreover, it enables the use of various video contents as knowledge.

  • PDF

A Video Expression Recognition Method Based on Multi-mode Convolution Neural Network and Multiplicative Feature Fusion

  • Ren, Qun
    • Journal of Information Processing Systems
    • /
    • v.17 no.3
    • /
    • pp.556-570
    • /
    • 2021
  • Existing video expression recognition methods mainly focus on the spatial feature extraction of video expression images but tend to ignore the dynamic features of video sequences. To solve this problem, a multi-mode convolution neural network method is proposed to effectively improve the performance of facial expression recognition in video. First, OpenFace 2.0 is used to detect face images in video, and two deep convolution neural networks are used to extract spatiotemporal expression features. A spatial convolution neural network extracts the spatial information features of each static expression image, while the dynamic information features are extracted from the optical flow of multiple expression images by a temporal convolution neural network. The spatiotemporal features learned by the two networks are then fused by multiplication. Finally, the fused features are input into a support vector machine to perform the facial expression classification. Experimental results show that the recognition accuracy of the proposed method reaches 64.57% and 60.89% on the RML and BAUM-1s datasets, respectively, which is better than that of the comparison methods.
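The multiplicative fusion step above can be sketched in a few lines. This is an illustrative assumption about the fusion stage only: the element-wise product of the spatial-stream and temporal-stream feature vectors, normalised before being handed to the SVM.

```python
import numpy as np

def multiplicative_fusion(spatial_feat, temporal_feat) -> np.ndarray:
    """Fuse a spatial (static image) feature vector and a temporal
    (optical-flow) feature vector by element-wise multiplication, then
    L2-normalise the result before passing it to the classifier."""
    fused = np.asarray(spatial_feat, dtype=float) * np.asarray(temporal_feat, dtype=float)
    norm = np.linalg.norm(fused)
    return fused / norm if norm > 0 else fused
```

Multiplicative fusion emphasises dimensions where both streams respond strongly, unlike concatenation or averaging, which treat the streams independently.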

Rotated Video Detection using Multi Region Binary Patterns (이중 영역 이진 패턴을 이용한 회전된 비디오 검출)

  • Kim, Semin;Lee, Seungho;Ro, Yong Man
    • Journal of Korea Multimedia Society
    • /
    • v.17 no.9
    • /
    • pp.1070-1075
    • /
    • 2014
  • Due to the large number of illegally copied videos, many video content markets have been threatened. Since these copied videos intercept the profits of the content holders, content developers lose the will to create new content. Therefore, video copy detection approaches have been developed to protect the copyrights of video contents. However, many illegal uploaders who generate copied videos use video transformations to evade video copy detection systems. Among these transformations, rotation and flipping do not distort the quality of video contents, so these two transformations are often adopted to generate copied videos. In order to detect rotated or flipped copies, the rotation- and flipping-robust region binary pattern (RFR) was recently proposed. However, the RFR has a weakness depending on the rotation angle. To overcome this problem, multi region binary patterns are proposed in this paper. The proposed method has performance similar to the original RFR, but much higher memory efficiency.
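The general idea of a region binary pattern can be illustrated as follows. The abstract does not give the exact RFR definition, so this sketch uses a deliberately simple rule (one bit per grid cell, comparing the cell's mean intensity to the global mean); the real descriptor is constructed differently to gain rotation and flipping robustness.

```python
import numpy as np

def region_binary_pattern(frame: np.ndarray, grid: int = 4) -> int:
    """Illustrative region binary pattern: split the frame into a
    grid x grid mosaic and set one bit per cell depending on whether the
    cell's mean intensity exceeds the global mean intensity."""
    h, w = frame.shape
    gmean = frame.mean()
    bits = 0
    for i in range(grid):
        for j in range(grid):
            cell = frame[i * h // grid:(i + 1) * h // grid,
                         j * w // grid:(j + 1) * w // grid]
            bits = (bits << 1) | int(cell.mean() > gmean)
    return bits
```

Descriptors of this kind are attractive for copy detection because each frame compresses to a handful of bits, which is why memory efficiency is the axis the paper improves on.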

Creation of Soccer Video Highlights Using Caption Information (자막 정보를 이용한 축구 비디오 하이라이트 생성)

  • Shin Seong-Yoon;Kang Il-Ko;Rhee Yang-Won
    • Journal of the Korea Society of Computer and Information
    • /
    • v.10 no.5 s.37
    • /
    • pp.65-76
    • /
    • 2005
  • A digital video is very long data that requires large-capacity storage space. As such, prior to watching a long original video, viewers want to watch a summarized version of it. In the field of sports in particular, highlight videos are frequently watched; a highlights video allows a viewer to determine whether the original video is worth watching. This paper proposes a scheme for creating soccer video highlights using the structural features of captions in terms of time and space. These structural features are used to extract caption frame intervals and caption keyframes. A highlights video is created by resetting shots for caption keyframes, by means of logical indexing, and through the use of a highlight-creation rule. Finally, highlight videos and video segments can be searched and browsed in a way that lets the viewer select desired items from the browser.

  • PDF

Video Watermarking Algorithm for H.264 Scalable Video Coding

  • Lu, Jianfeng;Li, Li;Yang, Zhenhua
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.7 no.1
    • /
    • pp.56-67
    • /
    • 2013
  • Because H.264/SVC can meet the needs of different networks and user terminals, it has become increasingly popular. In this paper, we focus on the spatial resolution scalability of H.264/SVC and propose a blind video watermarking algorithm for the copyright protection of H.264/SVC coded video. The watermark embedding occurs before H.264/SVC encoding, and only the original enhancement layer sequence is watermarked. However, because the watermark is embedded into the average matrix of each macroblock, it can be detected in both the enhancement layer and the base layer after downsampling, video encoding, and video decoding. The proposed algorithm is examined using JSVM, and experimental results show that it is robust to H.264/SVC coding and has little influence on video quality.
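Embedding a bit into a macroblock's average value, as the last paper describes, can be sketched as follows. The abstract does not give the exact embedding rule, so this sketch uses quantisation of the block mean, a common construction that shares the key property mentioned above: the mean survives downsampling, so the bit remains detectable in the base layer. The strength parameter is an assumption.

```python
import numpy as np

def embed_bit_in_block(block: np.ndarray, bit: int, strength: float = 4.0) -> np.ndarray:
    """Quantisation-style sketch: shift a macroblock uniformly so that
    round(mean / strength) has the parity of the watermark bit
    (even -> 0, odd -> 1)."""
    mean = block.mean()
    q = round(mean / strength)
    if q % 2 != bit:
        q += 1  # move to the nearest quantisation level with the right parity
    return block + (q * strength - mean)

def extract_bit(block: np.ndarray, strength: float = 4.0) -> int:
    """Blind detection: recover the bit from the block mean's parity."""
    return round(block.mean() / strength) % 2
```

Because downsampling a block roughly averages its pixels, the mean (and hence the embedded parity) is approximately preserved in the lower-resolution base layer, which is what makes the watermark detectable in both layers.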