• Title/Summary/Keyword: Video Extraction

Video Evaluation System Using Scene Change Detection and User Profile (장면전환검출과 사용자 프로파일을 이용한 비디오 학습 평가 시스템)

  • Shin, Seong-Yoon
    • The KIPS Transactions:PartD / v.11D no.1 / pp.95-104 / 2004
  • This paper proposes an efficient remote video-based evaluation system matched to the personalized characteristics of students, using information filtering based on user profiles. To pose questions in video form, a key frame extraction method based on coordinate, size, and color information is proposed, and question-making intervals are extracted using gray-level histogram differences and a time window. In addition, a question-making method that combines a category-based system with a keyword-based system is used for efficient evaluation. As a result, students can improve their achievement by both reinforcing their weak areas and maintaining their areas of interest.
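
As a rough illustration of the gray-level histogram difference and time-window step described in this abstract, the following Python sketch flags candidate shot changes; the bin count, difference threshold, and window length are assumptions, not values from the paper.

```python
# Minimal sketch: shot-change candidates from gray-level histogram differences,
# filtered with a simple time window (threshold and window size are assumptions).
import cv2
import numpy as np

def histogram_difference_cuts(video_path, threshold=0.4, window=15):
    """Return frame indices whose gray-level histogram differs strongly from the
    previous frame, keeping at most one candidate per `window` frames."""
    cap = cv2.VideoCapture(video_path)
    prev_hist, candidates, last_cut, idx = None, [], -window, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        hist = cv2.calcHist([gray], [0], None, [64], [0, 256]).ravel()
        hist /= hist.sum() + 1e-9                        # normalize to a distribution
        if prev_hist is not None:
            diff = 0.5 * np.abs(hist - prev_hist).sum()  # L1 histogram difference in [0, 1]
            if diff > threshold and idx - last_cut >= window:
                candidates.append(idx)
                last_cut = idx
        prev_hist = hist
        idx += 1
    cap.release()
    return candidates
```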

A Semantic Video Object Tracking Algorithm Using Contour Refinement (윤곽선 재조정을 통한 의미 있는 객체 추적 알고리즘)

  • Lim, Jung-Eun;Yi, Jae-Youn;Ra, Jong-Beom
    • Journal of the Institute of Electronics Engineers of Korea SP / v.37 no.6 / pp.1-8 / 2000
  • This paper describes a semi-automatic algorithm for semantic video object tracking. In the semi-automatic approach, a user specifies an object of interest in the first frame, and the specified object is then tracked through the remaining frames. The proposed algorithm consists of three steps: object boundary projection, uncertain area extraction, and boundary refinement. The object boundary is projected from the previous frame to the current frame using motion estimation, and uncertain areas are extracted via two modules: an ME (motion estimation) error test and a color similarity test. The exact object boundary is then obtained from the extracted uncertain areas by boundary refinement. Simulation results show that the proposed video object extraction method provides efficient tracking results on various video sequences compared with previous methods.
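
The uncertain-area extraction step can be pictured with a small sketch like the one below, which implements only the color similarity test around a projected boundary; the band width and color threshold are illustrative assumptions, and the ME error test and refinement stages are omitted.

```python
# Minimal sketch of a color-similarity test for uncertain-area extraction
# (band width and color threshold are assumptions, not values from the paper).
import cv2
import numpy as np

def uncertain_area(prev_frame, cur_frame, projected_mask, band=7, color_thresh=30.0):
    """Mark pixels in a band around the projected object boundary whose color
    changed strongly between frames; these become candidates for refinement.
    projected_mask is a binary uint8 mask (0/255) projected from the previous frame."""
    kernel = np.ones((band, band), np.uint8)
    # Band around the projected boundary: dilation minus erosion of the mask.
    boundary_band = cv2.dilate(projected_mask, kernel) - cv2.erode(projected_mask, kernel)
    # Per-pixel color distance between consecutive frames.
    diff = np.linalg.norm(cur_frame.astype(np.float32) - prev_frame.astype(np.float32), axis=2)
    return (boundary_band > 0) & (diff > color_thresh)
```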

A Dynamic Segmentation Method for Representative Key-frame Extraction from Video data (동적 분할 기법을 이용한 비디오 데이터의 대표키 프레임 추출)

  • Lee, Soon-Hee;Kim, Young-Hee;Ryu, Keun-Ho
    • Journal of the Institute of Electronics Engineers of Korea CI / v.38 no.1 / pp.46-57 / 2001
  • Content-based image retrieval techniques are required to access multimedia data such as video, which has temporal properties, and extraction of representative key frames is one of the basic techniques for such retrieval. We implemented the proposed dynamic segmentation method and, by analyzing video data, showed it to be both effective and accurate. The method is also expected to help with the real-world problem of building video databases, since it is very useful for constructing indexes.

Violent crowd flow detection from surveillance cameras using deep transfer learning-gated recurrent unit

  • Elly Matul Imah;Riskyana Dewi Intan Puspitasari
    • ETRI Journal / v.46 no.4 / pp.671-682 / 2024
  • Violence can be committed anywhere, even in crowded places, so human activities must be monitored for public safety. Surveillance cameras can monitor surrounding activities but require human assistance to continuously watch every incident. Automatic violence detection is needed for early warning and fast response, but such automation remains challenging because of low video resolution and blind spots. This paper uses ResNet50V2 and the gated recurrent unit (GRU) algorithm to detect violence in the Movies, Hockey, and Crowd video datasets. Spatial features were extracted from each frame sequence of the video using a pretrained ResNet50V2 model and then classified with the best-performing trained model on the GRU architecture. The experimental results were compared with wavelet feature extraction methods and with other classification models, such as the convolutional neural network and long short-term memory. The results show that the proposed combination of ResNet50V2 and GRU is robust and delivers the best performance in terms of accuracy, recall, precision, and F1-score, and that using ResNet50V2 for feature extraction improves model performance.
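
The described pipeline (frozen ResNet50V2 features per frame, a GRU over the frame sequence) could look roughly like the following Keras sketch; the sequence length, GRU width, and training settings are assumptions rather than the authors' configuration.

```python
# Minimal sketch: per-frame features from a pretrained ResNet50V2, classified
# over the frame sequence by a GRU (hyperparameters are assumptions).
import tensorflow as tf

NUM_FRAMES, H, W = 30, 224, 224

# Frozen ResNet50V2 backbone used as a spatial feature extractor.
backbone = tf.keras.applications.ResNet50V2(
    include_top=False, weights="imagenet", pooling="avg", input_shape=(H, W, 3))
backbone.trainable = False

inputs = tf.keras.Input(shape=(NUM_FRAMES, H, W, 3))
x = tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1.0)(inputs)   # scale pixels to [-1, 1]
x = tf.keras.layers.TimeDistributed(backbone)(x)                  # (batch, frames, 2048)
x = tf.keras.layers.GRU(128)(x)                                   # temporal aggregation
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)       # violent / non-violent

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```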

Overlay Text Graphic Region Extraction for Video Quality Enhancement Application (비디오 품질 향상 응용을 위한 오버레이 텍스트 그래픽 영역 검출)

  • Lee, Sanghee;Park, Hansung;Ahn, Jungil;On, Youngsang;Jo, Kanghyun
    • Journal of Broadcast Engineering / v.18 no.4 / pp.559-571 / 2013
  • This paper presents problems that arise when 2D video with superimposed overlay text is converted to 3D stereoscopic video. To resolve them, it proposes a scenario in which the original video is divided into two parts, one containing only the overlay text graphic region and the other containing the video with holes, which are then processed separately. The paper focuses on detecting and extracting the overlay text graphic region, the first step in the proposed scenario. To decide whether overlay text is present within a frame, a corner density map based on the Harris corner detector is used. The overlay text region is then extracted using a hybrid method that combines color and motion information of the region. Experiments show detection and extraction results on video sequences from several genres.
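
A minimal sketch of the frame-level decision based on a Harris corner density map is given below; the block size, corner quantile, and density threshold are assumptions, and the subsequent color/motion extraction stage is not shown.

```python
# Minimal sketch: decide whether a frame likely contains overlay text using a
# corner density map built from the Harris corner detector (parameters assumed).
import cv2
import numpy as np

def has_overlay_text(frame_bgr, block=32, corner_quantile=0.99, density_thresh=0.02):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    harris = cv2.cornerHarris(gray, blockSize=2, ksize=3, k=0.04)
    corners = harris > np.quantile(harris, corner_quantile)   # keep strongest responses
    h, w = corners.shape
    # Corner density per block; overlay text regions show unusually dense corners.
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            if corners[y:y + block, x:x + block].mean() > density_thresh:
                return True
    return False
```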

Improved Quality Keyframe Selection Method for HD Video

  • Yang, Hyeon Seok;Lee, Jong Min;Jeong, Woojin;Kim, Seung-Hee;Kim, Sun-Joong;Moon, Young Shik
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.6 / pp.3074-3091 / 2019
  • With the widespread use of the Internet, services that provide large volumes of multimedia data, such as video-on-demand (VOD) services and video uploading sites, have greatly increased. VOD service providers want to offer users high-quality keyframes of high-quality videos within a few minutes after a broadcast ends. However, existing keyframe extraction methods tend to select keyframes without sufficiently considering their quality and require long computation times because they are not designed for HD-class video. In this paper, we propose a keyframe selection method that flexibly applies multiple keyframe quality metrics and improves the computation time. The main procedure is as follows. After shot boundary detection, the first frame of each shot is extracted as an initial keyframe. The user sets evaluation metrics and priorities according to the genre and attributes of the video. Based on the evaluation metrics and priorities, low-quality keyframes are selected as replacement targets, and each target keyframe is replaced with a high-quality frame from its shot. The proposed method was subjectively evaluated with 23 votes: approximately 45% of the replaced keyframes were improved and about 18% were adversely affected. It also took about 10 minutes to summarize a one-hour video, a reduction of more than 44.5% in execution time.
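
The replacement step can be sketched as below, using Laplacian-variance sharpness and mean brightness as stand-ins for the user-selected quality metrics described in the abstract; the metric choices and thresholds are assumptions.

```python
# Minimal sketch: within each shot, keep the initial keyframe if it passes the
# quality checks, otherwise replace it with a higher-quality frame from the shot.
import cv2
import numpy as np

def sharpness(gray):
    return cv2.Laplacian(gray, cv2.CV_64F).var()      # higher = less blur

def brightness_ok(gray, lo=40, hi=220):
    return lo < gray.mean() < hi                       # reject very dark/bright frames

def select_keyframe(shot_frames, min_sharpness=100.0):
    """shot_frames: list of BGR frames in one shot; index 0 is the initial keyframe."""
    grays = [cv2.cvtColor(f, cv2.COLOR_BGR2GRAY) for f in shot_frames]
    if sharpness(grays[0]) >= min_sharpness and brightness_ok(grays[0]):
        return 0                                       # initial keyframe is good enough
    # Otherwise pick the sharpest frame in the shot with acceptable brightness.
    scores = [sharpness(g) if brightness_ok(g) else -1.0 for g in grays]
    return int(np.argmax(scores))
```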

Analysis of Economical Efficiency by the Extraction Method of Road Spatial Information (도로공간정보의 추출방법에 따른 경제성 분석)

  • 이종출;박운용;문두열;서동주
    • Proceedings of the Korean Society of Surveying, Geodesy, Photogrammetry, and Cartography Conference / 2004.04a / pp.527-533 / 2004
  • This study acquired three-dimensional road position data, as road spatial information, using RTK GPS, DGPS, and a digital video camera. An economic efficiency analysis was applied to road spatial information built by four different methods: conventional surveying, RTK GPS, DGPS, and a digital video camera. With the cost of conventional surveying taken as 100%, the analysis showed about 64% for RTK GPS, about 63% for DGPS, and about 37% for the digital video camera, demonstrating the cost savings of each method.

A New Details Extraction Technique for Video Sequence Using Morphological Laplacian (수리형태학적 Laplacian 연산을 이용한 새로운 동영상 Detail 추출 기법)

  • 김희준;어진우
    • Proceedings of the IEEK Conference / 1998.10a / pp.911-914 / 1998
  • This paper presents the importance of including small image features at the initial levels of a progressive second-generation video coding scheme. It is shown that a number of meaningful small features, called details, should be coded in order to match their perceptual significance to the human visual system. We propose a method for extracting, perceptually selecting, and coding visual details in a video sequence using the morphological Laplacian operator and a modified post-it transform, which is very efficient for improving the quality of the reconstructed images.
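
The morphological Laplacian itself is straightforward to write down; the sketch below uses the common definition (dilation + erosion − 2·image)/2, and the thresholding step for selecting details is an assumption about how the extraction might be realized.

```python
# Minimal sketch: morphological Laplacian as a small-feature (detail) detector
# (kernel size and detail threshold are assumptions).
import cv2
import numpy as np

def morphological_laplacian(gray, ksize=3):
    kernel = np.ones((ksize, ksize), np.uint8)
    dil = cv2.dilate(gray, kernel).astype(np.int16)
    ero = cv2.erode(gray, kernel).astype(np.int16)
    return (dil + ero - 2 * gray.astype(np.int16)) // 2   # signed detail response

def extract_details(gray, thresh=10):
    lap = morphological_laplacian(gray)
    return np.abs(lap) > thresh      # binary mask of candidate small features
```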

A Study of Real-time SVC Bitstream Extraction for QoS guaranteed Streaming (QoS 기반 스트리밍 서비스를 위한 실시간 SVC 비트스트림 추출기에 대한 연구)

  • Kim, Duck-Yeon;Bae, Tae-Meon;Kim, Young-Suk;Ro, Yong-Man;Choi, Hae-Chul;Kim, Jae-Gon
    • Proceedings of the IEEK Conference / 2005.11a / pp.513-516 / 2005
  • SVC (Scalable Video Coding) is an MPEG standardization effort that aims to support multiple spatial, temporal, and quality layers. Using an SVC bitstream, video services can guarantee QoS under varying network conditions. In this paper, we propose a real-time SVC bitstream extractor that can extract bitstreams with varying frame rates and SNR quality in real time. To achieve this, extraction is performed per GOP, and the essential bitstream information needed for real-time extraction is acquired before the extraction process begins. The proposed method is implemented using JSVM 2.0, and experimental results show that it is valid.
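
The GOP-unit extraction idea can be illustrated conceptually as below; the NAL-unit representation (plain dicts with gop, temporal_id, and quality_id fields) is an assumption for illustration and does not correspond to the JSVM data structures.

```python
# Conceptual sketch: per-GOP filtering of an already-parsed scalable bitstream to
# a lower frame rate / SNR operating point (data layout is an assumption).
from collections import defaultdict

def extract_operating_point(nal_units, max_temporal_id, max_quality_id):
    """Process the stream one GOP at a time, keeping only NAL units whose
    temporal and SNR (quality) layer identifiers fit the target operating point."""
    gops = defaultdict(list)
    for nal in nal_units:
        gops[nal["gop"]].append(nal)
    out = []
    for gop_id in sorted(gops):
        out.extend(n for n in gops[gop_id]
                   if n["temporal_id"] <= max_temporal_id
                   and n["quality_id"] <= max_quality_id)
    return out

# Example: drop the highest temporal layer (halving frame rate) and keep base SNR.
# sub_stream = extract_operating_point(parsed_nals, max_temporal_id=2, max_quality_id=0)
```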

An Efficient Video Sequence Matching Algorithm (효율적인 비디오 시퀀스 정합 알고리즘)

  • 김상현;박래홍
    • Journal of the Institute of Electronics Engineers of Korea SP / v.41 no.5 / pp.45-52 / 2004
  • With the development of digital media technologies, various algorithms have been proposed to match video sequences efficiently. A large number of video sequence matching methods have focused on frame-wise queries, whereas relatively few algorithms have been presented for video sequence or video shot matching. In this paper, we propose an efficient algorithm to index video sequences and to retrieve them for video sequence queries. To improve the accuracy and performance of video sequence matching, we employ the Cauchy function as a similarity measure between histograms of consecutive frames, which yields high performance compared with conventional measures. Key frames extracted from segmented video shots can be used not only for video shot clustering but also for video sequence matching or browsing, where a key frame is defined as a frame that is significantly different from the previous frames. Several key frame extraction algorithms have been proposed in which methods similar to those used for shot boundary detection are employed with appropriate similarity measures. We propose an efficient algorithm to extract key frames using the cumulative Cauchy function measure and compare its performance with that of conventional algorithms. Video sequence matching can then be performed by evaluating the similarity between sets of key frames; to improve matching efficiency with the extracted key frames, we employ the Cauchy function and the modified Hausdorff distance. Experimental results on several color video sequences show that the proposed method yields high matching performance and accuracy with a low computational load compared with conventional algorithms.
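
The two measures named in this abstract can be sketched as follows; the Cauchy kernel form, its sigma parameter, and the use of raw histograms as key-frame features are assumptions, while the modified Hausdorff distance follows the standard Dubuisson-Jain definition.

```python
# Illustrative sketch: a Cauchy-type similarity between frame histograms and the
# modified Hausdorff distance between two sets of key-frame feature vectors.
import numpy as np

def cauchy_similarity(hist_a, hist_b, sigma=0.05):
    """Per-bin Cauchy kernel 1 / (1 + (d / sigma)^2), averaged over bins."""
    d = np.abs(np.asarray(hist_a, float) - np.asarray(hist_b, float))
    return float(np.mean(1.0 / (1.0 + (d / sigma) ** 2)))

def modified_hausdorff(set_a, set_b):
    """Dubuisson-Jain modified Hausdorff distance between two sets of
    key-frame feature vectors (one vector per row)."""
    A, B = np.asarray(set_a, float), np.asarray(set_b, float)
    dists = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)   # pairwise distances
    d_ab = dists.min(axis=1).mean()    # average nearest-neighbour distance A -> B
    d_ba = dists.min(axis=0).mean()    # average nearest-neighbour distance B -> A
    return float(max(d_ab, d_ba))
```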