• Title/Abstract/Keywords: Video extraction

Search results: 466 items (processing time: 0.021 s)

MobileNetV3 전이학습 기반 스포츠 비디오 클립 추출 구현 (Implementation of Sports Video Clip Extraction Based on MobileNetV3 Transfer Learning)

  • 위리
    • 한국전자통신학회논문지 / Vol. 17, No. 5 / pp. 897-904 / 2022
  • Sports video is an important information resource. Extracting valid clips from sports video with high accuracy helps coaches analyze players' movements and lets users view players' striking postures more intuitively. To address the shortcomings of current sports video clip extraction, namely strong subjectivity, heavy workload, and low efficiency, a sports video clip classification method based on MobileNetV3 is proposed, which also saves users' time. Experiments evaluated the validity of the extracted clips: valid clips accounted for 97.0% of those extracted, showing that the extraction results are good and laying the groundwork for building a raw video dataset of badminton strokes.
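
Since the abstract only names the backbone, here is a minimal transfer-learning sketch in PyTorch/torchvision of the kind of pipeline it describes: freeze an ImageNet-pretrained MobileNetV3 and retrain only the final layer to label frames as valid or invalid clip content. The dataset layout, class names, and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Minimal transfer-learning sketch: fine-tune MobileNetV3 to label video frames
# as "valid" or "invalid" clip content. Hyperparameters and data layout are
# illustrative assumptions, not the paper's actual setup.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

# Frames exported from clips, arranged as frames/train/{valid,invalid}/*.jpg (assumed layout).
tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("frames/train", transform=tfm)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# Load an ImageNet-pretrained MobileNetV3 and replace only the final classifier layer.
model = models.mobilenet_v3_small(weights="IMAGENET1K_V1")
for p in model.parameters():
    p.requires_grad = False                      # freeze the backbone
in_features = model.classifier[3].in_features
model.classifier[3] = nn.Linear(in_features, 2)  # two classes: valid / invalid
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.classifier[3].parameters(), lr=1e-3)

for epoch in range(5):
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
```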

New Framework for Automated Extraction of Key Frames from Compressed Video

  • Kim, Kang-Wook;Kwon, Seong-Geun
    • 한국멀티미디어학회논문지 / Vol. 15, No. 6 / pp. 693-700 / 2012
  • The effective extraction of key frames from a video stream is an essential task for summarizing and representing the content of a video. Accordingly, this paper proposes a new and fast method for extracting key frames from a compressed video. In the proposed approach, after the entire video sequence has been segmented into elementary content units, called shots, key frame extraction is performed by first assigning the number of key frames to each shot, and then distributing the key frames over the shot using a probabilistic approach to locate the optimal position of the key frames. The main advantage of the proposed method is that no time-consuming computations are needed for distributing the key frames within the shots and the procedure for key frame extraction is completely automatic. Furthermore, the set of key frames is independent of any subjective thresholds or manually set parameters.
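
As a rough illustration of the two-stage idea in this abstract (give each shot a key-frame budget, then place key frames inside the shot), the sketch below uses length-proportional allocation and uniform placement as simplified stand-ins for the paper's probabilistic positioning.

```python
# Sketch of shot-proportional key-frame allocation followed by placement inside
# each shot. The real method positions key frames probabilistically; uniform
# spacing is used here only to keep the example short.
def allocate_key_frames(shot_lengths, total_key_frames):
    """Assign each shot a number of key frames proportional to its length."""
    total = sum(shot_lengths)
    return [max(1, round(total_key_frames * n / total)) for n in shot_lengths]

def place_key_frames(shot_start, shot_length, count):
    """Spread `count` key frames across one shot (uniform stand-in)."""
    step = shot_length / (count + 1)
    return [shot_start + int(step * (i + 1)) for i in range(count)]

shots = [(0, 120), (120, 45), (165, 300)]      # (start frame, length) per shot
alloc = allocate_key_frames([n for _, n in shots], total_key_frames=8)
key_frames = [f for (s, n), k in zip(shots, alloc)
              for f in place_key_frames(s, n, k)]
print(key_frames)
```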

Review for vision-based structural damage evaluation in disasters focusing on nonlinearity

  • Sifan Wang;Mayuko Nishio
    • Smart Structures and Systems / Vol. 33, No. 4 / pp. 263-279 / 2024
  • With the increasing diversity of internet media, video data have become more abundant and easier to obtain. Research based on such video data has advanced rapidly in recent years owing to advantages such as noncontact, low-cost data acquisition, high spatial resolution, and simultaneity. Additionally, structural nonlinearity extraction has attracted increasing attention as a tool for damage evaluation. This review summarizes recent developments and applications of video-based technology for structural nonlinearity extraction and damage evaluation. The most commonly used object-detection image and video databases are summarized first, followed by suggestions for obtaining video data of structural nonlinear damage events. Technologies for linear and nonlinear system identification based on video data are then discussed. In addition, common nonlinear damage types in disaster events and prevalent processing algorithms are reviewed in the section on structural damage evaluation using video data uploaded to online platforms. Finally, potential research directions are discussed to address the weaknesses of current video-based nonlinear extraction technology, such as its reliance on one-dimensional time-series data and the difficulty of real-time detection, covering nonlinear extraction for spatial data, real-time detection, and visualization.

A New Framework for Automatic Extraction of Key Frames Using DC Image Activity

  • Kim, Kang-Wook
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 8, No. 12 / pp. 4533-4551 / 2014
  • The effective extraction of key frames from a video stream is an essential task for summarizing and representing the content of a video. Accordingly, this paper proposes a new and fast method for extracting key frames from a compressed video. In the proposed approach, after the entire video sequence has been segmented into elementary content units, called shots, key frame extraction is performed by first assigning the number of key frames to each shot, and then distributing the key frames over the shot using a probabilistic approach to locate the optimal position of the key frames. Moreover, we implement our proposed framework on Android to confirm its validity, availability, and usefulness. The main advantage of the proposed method is that no time-consuming computations are needed for distributing the key frames within the shots and the procedure for key frame extraction is completely automatic. Furthermore, the set of key frames is independent of any subjective thresholds or manually set parameters.

비디오에서 객체의 시공간적 연속성과 움직임을 이용한 동적 객체추출에 관한 연구 (A Study on the Extraction of the dynamic objects using temporal continuity and motion in the Video)

  • 박창민
    • 디지털산업정보학회논문지 / Vol. 12, No. 4 / pp. 115-121 / 2016
  • Recently, extracting semantic objects from videos has become an important problem, useful for improving the performance of video compression and video retrieval. This paper suggests an automatic method for extracting moving objects of interest from video. We define a moving object of interest as one that is relatively large in the frame image, occurs frequently in a scene, and whose motion differs from the camera motion. Moving objects of interest are determined through spatial continuity using the AMOS method and motion histograms. Through experiments with diverse scenes, we found that the proposed method extracted almost all of the objects of interest selected by the user, but its precision was 69% because of over-extraction.
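
The AMOS method itself is not spelled out in the abstract, so the sketch below is only an illustrative stand-in: it uses dense optical flow (OpenCV), treats the median flow as camera motion, and flags large regions whose motion deviates from it. Thresholds and the input file name are assumptions.

```python
# Illustrative stand-in (not the paper's AMOS method): flag pixels whose optical
# flow deviates from the dominant (camera) motion as candidate moving-object
# regions, then keep only reasonably large connected regions.
import cv2
import numpy as np

cap = cv2.VideoCapture("scene.mp4")            # assumed input file
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    # Treat the median flow as the global (camera) motion.
    global_motion = np.median(flow.reshape(-1, 2), axis=0)
    residual = np.linalg.norm(flow - global_motion, axis=2)
    mask = (residual > 2.0).astype(np.uint8) * 255   # threshold is an assumption
    # Keep only reasonably large connected components as objects of interest.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    for i in range(1, n):
        if stats[i, cv2.CC_STAT_AREA] > 0.01 * mask.size:
            x, y, w, h = stats[i, :4]
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    prev_gray = gray
```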

H.264/AVC로 압축된 비디오로부터 시그너쳐 추출방법 (Signature Extraction Method from H.264 Compressed Video)

  • 김성민;권용광;원치선
    • 대한전자공학회논문지SP / Vol. 46, No. 3 / pp. 10-17 / 2009
  • This paper proposes a video signature extraction method that can be used for CBCD (Content-Based Copy Detection), a video copy-protection technique, in the H.264/AVC compressed domain. Existing video signature extraction methods all operate in the spatial domain, so extracting a signature from a compressed video stream requires fully decoding the video. The proposed method overcomes this drawback by quickly constructing thumbnails in the compressed domain and extracting the video signature from those thumbnails. Experimental results on extracting luminance-ordinal information show that the proposed method extracts signatures about 2.8 times faster than the existing method while maintaining 80.98% accuracy.
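
The sketch below shows a toy luminance-ordinal signature and a rank-distance comparison of the kind used in content-based copy detection. It operates on decoded frames, so it illustrates only the signature itself, not the paper's compressed-domain thumbnail construction; block counts and file names are assumptions.

```python
# Toy ordinal (rank-order) luminance signature, as used for content-based copy
# detection. Works on decoded frames, so it illustrates only the signature,
# not the paper's compressed-domain thumbnail construction.
import cv2
import numpy as np

def ordinal_signature(frame_bgr, grid=(4, 4)):
    """Rank the mean luminance of grid blocks; the rank vector is the signature."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    thumb = cv2.resize(gray, (grid[1] * 8, grid[0] * 8))
    h, w = thumb.shape
    bh, bw = h // grid[0], w // grid[1]
    means = [thumb[r*bh:(r+1)*bh, c*bw:(c+1)*bw].mean()
             for r in range(grid[0]) for c in range(grid[1])]
    return np.argsort(np.argsort(means))        # rank of each block

def signature_distance(sig_a, sig_b):
    """Normalized L1 distance between two rank vectors (0 = identical order)."""
    return np.abs(sig_a - sig_b).sum() / len(sig_a)

a = ordinal_signature(cv2.imread("frame_a.png"))   # assumed sample frames
b = ordinal_signature(cv2.imread("frame_b.png"))
print(signature_distance(a, b))
```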

2-모드 선택 기반의 압축비디오 신호의 움직임 객체 블록 추출 (Moving Object Block Extraction for Compressed Video Signal Based on 2-Mode Selection)

  • 김동욱
    • 한국컴퓨터정보학회논문지 / Vol. 12, No. 5 / pp. 163-170 / 2007
  • This paper presents a new technique for extracting moving objects from the motion vectors and DCT coefficients of a compressed video signal. Moving-object extraction is needed in various fields such as content-based retrieval and target tracking. A 2-mode scheme is presented in which motion vectors and DCT coefficients are used selectively to extract moving-object blocks. Because the proposed technique uses only coefficients in the DCT transform domain, it has the advantage of not requiring fully decoded information. Simulations on several test sequences based on the proposed technique produced good results.
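
A toy version of the 2-mode selection idea is sketched below: per macroblock, use the motion-vector residual when motion information is informative, otherwise fall back to DCT AC energy. The inputs are assumed to have been parsed from the bitstream already, and the thresholds are illustrative, not the paper's.

```python
# Toy 2-mode block classifier: for each macroblock, use the motion vector when
# it is informative, otherwise fall back to DCT AC energy. Inputs are assumed
# to have been parsed from the bitstream already; thresholds are illustrative.
import numpy as np

def moving_object_blocks(motion_vectors, dct_ac_energy,
                         mv_thresh=1.5, energy_thresh=200.0):
    """motion_vectors: (H, W, 2) per-macroblock MVs; dct_ac_energy: (H, W)."""
    mv_mag = np.linalg.norm(motion_vectors, axis=2)
    # Remove the dominant (camera) motion so only object motion remains.
    residual = np.abs(mv_mag - np.median(mv_mag))
    mode_mv = residual > mv_thresh                                # mode 1: motion-vector evidence
    mode_dct = (mv_mag < 0.25) & (dct_ac_energy > energy_thresh)  # mode 2: texture-change evidence
    return mode_mv | mode_dct

mvs = np.zeros((9, 11, 2)); mvs[3:6, 4:7] = [4.0, 0.0]   # synthetic moving region
energy = np.random.rand(9, 11) * 50
print(moving_object_blocks(mvs, energy).astype(int))
```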

동영상에서 배경프레임을 이용한 차량 프레임 검출 (Car Frame Extraction using Background Frame in Video)

  • 남석우;오해석
    • 정보처리학회논문지B / Vol. 10B, No. 6 / pp. 705-710 / 2003
  • For content-based retrieval from video, this study proposes a system that detects changes in content between consecutive frames and stores in a database the time information of each frame together with the information obtained from the license-plate frame image. The desired frames are found by comparing the feature information of a comparison region between a background frame and each processed frame. The system automatically extracts a vehicle's passing time and its license-plate frame, stores the video together with its content, and provides a web-based retrieval system that shows the video segment of the desired vehicle. This makes it possible to build traffic information and provide the content the video contains, that is, information on the passing vehicles.
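
A minimal sketch of the background-frame comparison described above follows: difference each frame against a background frame inside a comparison region and record the frame number and timestamp when enough of the region changes. The file name, region, and thresholds are assumptions; license-plate localization is not shown.

```python
# Sketch of background-frame differencing to pick out frames that contain a
# vehicle, recording the frame index and timestamp. File name, region of
# interest, and thresholds are assumptions for illustration.
import cv2
import numpy as np

cap = cv2.VideoCapture("road.avi")
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
ok, background = cap.read()                       # first frame used as background
bg_gray = cv2.cvtColor(background, cv2.COLOR_BGR2GRAY)

roi = (slice(200, 400), slice(100, 500))          # comparison region (assumed)
vehicle_frames = []
idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    idx += 1
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray[roi], bg_gray[roi])
    changed = np.count_nonzero(diff > 40) / diff.size
    if changed > 0.15:                            # enough of the region changed
        vehicle_frames.append((idx, idx / fps))   # (frame number, time in seconds)

print(vehicle_frames[:10])
```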

Video Segmentation and Key frame Extraction using Multi-resolution Analysis and Statistical Characteristic

  • Cho, Wan-Hyun;Park, Soon-Young;Park, Jong-Hyun
    • Communications for Statistical Applications and Methods / Vol. 10, No. 2 / pp. 457-469 / 2003
  • In this paper, we propose an efficient algorithm that can segment video scene changes using various statistical characteristics obtained by applying the wavelet transform to each frame. Our method first extracts histogram features from the low-frequency subband of the wavelet-transformed image and uses these features to detect abrupt scene changes. Second, it extracts edge information by applying the mesh method to the high-frequency subbands of the transformed image. The extracted edge information is quantified as per-pixel variance values, which are used to detect gradual scene changes. We also propose an algorithm for extracting an appropriate key frame from each segmented video scene. Experimental results show that the proposed method is both efficient at segmenting video scenes and appropriate for key frame extraction.
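
The sketch below illustrates the two wavelet-based cues from the abstract: a histogram on the low-frequency (LL) subband for abrupt cuts, and high-frequency subband variance as a cue for gradual change. It uses PyWavelets and OpenCV, with the wavelet choice, bin counts, and thresholds as assumptions rather than the authors' settings.

```python
# Sketch of wavelet-based shot-change features: histogram difference on the
# low-frequency (LL) subband for abrupt cuts, high-frequency subband variance
# as a cue for gradual transitions. Thresholds and wavelet choice are assumptions.
import cv2
import numpy as np
import pywt

def wavelet_features(gray):
    """Histogram of the LL subband and total variance of the HF subbands."""
    LL, (LH, HL, HH) = pywt.dwt2(gray.astype(np.float32), "haar")
    hist, _ = np.histogram(LL, bins=64, range=(0, 512))
    hist = hist / (hist.sum() + 1e-9)
    hf_var = float(np.var(LH) + np.var(HL) + np.var(HH))
    return hist, hf_var

def is_abrupt_cut(hist_prev, hist_cur, thresh=0.4):
    """L1 distance between consecutive LL-subband histograms."""
    return np.abs(hist_prev - hist_cur).sum() > thresh

cap = cv2.VideoCapture("input.mp4")
prev_hist = None
idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    hist, hf_var = wavelet_features(gray)
    if prev_hist is not None and is_abrupt_cut(prev_hist, hist):
        print(f"abrupt scene change near frame {idx}, HF variance {hf_var:.1f}")
    prev_hist = hist
    idx += 1
```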

Study on 3 DoF Image and Video Stitching Using Sensed Data

  • Kim, Minwoo;Chun, Jonghoon;Kim, Sang-Kyun
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 11, No. 9 / pp. 4527-4548 / 2017
  • This paper proposes a method to generate panoramic images by combining conventional feature extraction algorithms (e.g., SIFT, SURF, MPEG-7 CDVS) with data from inertial sensors to enhance the stitching results. Image stitching becomes more challenging when the images are taken by two different mobile phones with no posture calibration. Using inertial sensor data obtained by the mobile phones, images with different yaw, pitch, and roll angles are preprocessed and adjusted before the stitching process. Stitching performance (e.g., feature extraction time, number of inlier points, stitching accuracy) is reported for the conventional feature extraction algorithms, both with and without the inertial sensor data. In addition, the stitching accuracy for video data was improved using the same sensed data, with discrete calculation of the homography matrix. Experimental results on stitching accuracy and speed using the sensed data are presented.
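
As context for the feature-based part of this study, here is a minimal SIFT-plus-RANSAC-homography stitching sketch in OpenCV. The paper's additional step of pre-aligning the images with yaw/pitch/roll from the phones' inertial sensors is omitted; file names and the ratio-test threshold are assumptions.

```python
# Minimal feature-based stitching sketch (SIFT + RANSAC homography). The
# paper's extra step of pre-aligning images with inertial-sensor angles is
# omitted here; this shows only the stitching core.
import cv2
import numpy as np

img1 = cv2.imread("left.jpg")    # assumed input images
img2 = cv2.imread("right.jpg")

sift = cv2.SIFT_create()
k1, d1 = sift.detectAndCompute(cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY), None)
k2, d2 = sift.detectAndCompute(cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY), None)

# Ratio-test matching, then robust homography estimation.
matcher = cv2.BFMatcher()
good = [m for m, n in matcher.knnMatch(d1, d2, k=2) if m.distance < 0.75 * n.distance]
src = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, inliers = cv2.findHomography(dst, src, cv2.RANSAC, 5.0)
print(f"{int(inliers.sum())} inlier matches of {len(good)}")

# Warp the second image into the first image's frame and paste the first on top.
h, w = img1.shape[:2]
pano = cv2.warpPerspective(img2, H, (w * 2, h))
pano[:h, :w] = img1
cv2.imwrite("pano.jpg", pano)
```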