• Title/Summary/Keyword: Automatic video editing

Search Results: 8

Video Automatic Editing Method and System based on Machine Learning (머신러닝 기반의 영상 자동 편집 방법 및 시스템)

  • Lee, Seung-Hwan;Park, Dea-woo
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.05a / pp.235-237 / 2022
  • Video content is divided into long-form and short-form content according to its length. Long-form video content is 15 minutes or longer and includes all frames of the captured video without editing. Short-form video content is edited down to a length of 1 to 15 minutes and includes only some of the captured frames. With the recent growth of the single-person broadcasting market, demand for short-form video content that attracts viewers is increasing, so research on editing technology for generating short-form content is needed. This study examines technology for creating short-form videos of main scenes by capturing images, voice, and motion. Short-form videos of key scenes are produced using a highlight-extraction model pre-trained through machine learning. An automatic video editing system and method that automatically generates highlight videos is a core technology for short-form video content. Research on machine-learning-based automatic video editing will contribute to competitive content creation by reducing the effort, cost, and time that single creators invest in video editing.
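The paper does not describe its model internals; as a minimal sketch, assuming a pretrained highlight model has already produced a score per second of footage, the editing step that picks the best contiguous segment for a short-form clip could look like this (function name and scores are illustrative):

```python
def best_highlight_window(scores, clip_len):
    """Return (start, end) of the contiguous window of clip_len seconds
    whose summed highlight score is maximal -- a hypothetical
    post-processing step after a pretrained highlight-extraction model."""
    if clip_len > len(scores):
        raise ValueError("clip longer than available footage")
    window = sum(scores[:clip_len])           # score of the first window
    best, best_start = window, 0
    for i in range(1, len(scores) - clip_len + 1):
        # slide the window one second: add the new score, drop the old one
        window += scores[i + clip_len - 1] - scores[i - 1]
        if window > best:
            best, best_start = window, i
    return best_start, best_start + clip_len

# e.g. per-second scores from a (hypothetical) pretrained model
scores = [0.1, 0.2, 0.9, 0.8, 0.7, 0.1, 0.3]
print(best_highlight_window(scores, 3))  # -> (2, 5)
```

A real system would cut the clip on shot boundaries rather than at arbitrary seconds, but the window-selection idea is the same.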


News Video Editing System (뉴스비디오 편집시스템)

  • 고경철;이양원
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2000.10a / pp.421-425 / 2000
  • Efficient retrieval of news video requires the development of video processing and editing technology that can extract meaningful information from video data. Advanced countries are researching video editing systems, and attention has recently turned to building fully practical systems. This paper presents a system that can extract and edit meaningful information from video data on user demand, through scene-change detection and an editing system with automatic/manual classification; the system lets the user select the more efficient scene-change detection algorithm.
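The abstract does not specify which detector is used; a common baseline for scene-change detection is thresholding the frame-to-frame histogram difference. A minimal pure-Python sketch on tiny grayscale frames (all names and the threshold are illustrative):

```python
def gray_histogram(frame, bins=8):
    """Coarse intensity histogram of a grayscale frame (list of pixel rows)."""
    hist = [0] * bins
    for row in frame:
        for px in row:
            hist[min(px * bins // 256, bins - 1)] += 1
    return hist

def detect_scene_changes(frames, threshold=0.5):
    """Flag frame indices where the normalized histogram difference to the
    previous frame exceeds the threshold -- a simple illustrative stand-in
    for a user-selectable scene-change detector."""
    changes = []
    prev = gray_histogram(frames[0])
    total = sum(prev)  # pixel count, constant for fixed-size frames
    for i in range(1, len(frames)):
        cur = gray_histogram(frames[i])
        # L1 distance, normalized to [0, 1]
        diff = sum(abs(a - b) for a, b in zip(prev, cur)) / (2 * total)
        if diff > threshold:
            changes.append(i)
        prev = cur
    return changes

dark = [[10] * 4] * 4     # 4x4 uniformly dark frame
bright = [[240] * 4] * 4  # 4x4 uniformly bright frame
print(detect_scene_changes([dark, dark, bright, bright]))  # -> [2]
```

Production systems typically compare histograms per color channel or per block to be robust to camera motion, but the thresholding logic is the same.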


Uncertain Region Based User-Assisted Segmentation Technique for Object-Based Video Editing System (객체기반 비디오 편집 시스템을 위한 불확실 영역기반 사용자 지원 비디오 객체 분할 기법)

  • Yu Hong-Yeon;Hong Sung-Hoon
    • Journal of Korea Multimedia Society / v.9 no.5 / pp.529-541 / 2006
  • In this paper, we propose a semi-automatic segmentation method which can be used to generate video object planes (VOPs) for object-based coding schemes and multimedia authoring environments. Semi-automatic segmentation can be considered a user-assisted segmentation technique. A user initially marks objects of interest around their boundaries, and the selected objects are then continuously separated from the unselected areas through time evolution in the image sequence. The proposed segmentation method consists of two processing steps: partially manual intra-frame segmentation and fully automatic inter-frame segmentation. The intra-frame segmentation incorporates user assistance to define the meaningful, complete visual object of interest to be segmented and decides the precise object boundary. The inter-frame segmentation involves boundary and region tracking to obtain temporal coherence of the moving object based on the object boundary information of the previous frame. The proposed method shows stable and efficient results suitable for many digital video applications such as multimedia content authoring, content-based coding, and indexing. Based on this result, we have developed an object-based video editing system with several convenient editing functions.
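The paper's refinement details are not given in the abstract; one way to picture the "uncertain region" is as the band of pixels around a rough user-drawn object mask whose neighbourhood contains both object and background. A small illustrative sketch (the neighbourhood-based definition is an assumption, not the paper's exact formulation):

```python
def uncertain_region(mask, band=1):
    """A pixel is 'uncertain' if its (2*band+1)-square neighbourhood contains
    both object (1) and background (0) pixels -- an illustrative sketch of
    the band around a rough object boundary that is later refined."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = {mask[ny][nx]
                    for ny in range(max(0, y - band), min(h, y + band + 1))
                    for nx in range(max(0, x - band), min(w, x + band + 1))}
            out[y][x] = 1 if len(vals) > 1 else 0
    return out

# 5x5 rough mask: a 3x3 object in the centre
mask = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]
band_map = uncertain_region(mask)
print(band_map[2][2], band_map[0][0])  # -> 0 1  (centre certain, corner uncertain)
```

Only the pixels flagged 1 would then be re-classified precisely; interior and far-background pixels keep the user's initial label, which is what makes the approach cheap enough for interactive editing.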


Automatic Video Editing Technology based on Matching System using Genre Characteristic Patterns (장르 특성 패턴을 활용한 매칭시스템 기반의 자동영상편집 기술)

  • Mun, Hyejun;Lim, Yangmi
    • Journal of Broadcast Engineering / v.25 no.6 / pp.861-869 / 2020
  • We introduce an application that automatically turns several images stored on a user's device into a single video, using the different climax patterns that appear in each film genre. To classify genre characteristics, a climax-pattern model was created by analyzing domestic and foreign movies in the drama, action, and horror genres. The climax pattern was characterized by changes in shot size, shot length, and the frequency of inserts in a specific scene of a movie, and the result was visualized. The model visualized for each genre was developed into a template using a Firebase DB. Images stored on the user's device are selected and matched with the climax-pattern template for each genre. Although the output is a short video, a feature of the proposed application is that it can create an emotional story video reflecting the characteristics of the genre. Recently, platform operators such as YouTube and Naver have been upgrading applications that automatically generate video from pictures or videos taken by the user with a smartphone. However, applications that capture genre characteristics as movies do, or that include video-generation technology for telling stories, are still scarce. The proposed automatic video editing has the potential to develop into a video editing application capable of conveying emotion.
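The paper's actual templates are stored in Firebase and are not reproduced in the abstract; as a sketch, assuming each genre template is an ordered sequence of desired shot sizes and each user image has an estimated shot size, the matching step might greedily fill the template like this (templates, names, and the greedy policy are all illustrative):

```python
# Hypothetical climax templates: per-genre ordered shot-size sequences
# (illustrative, not taken from the paper's Firebase DB).
TEMPLATES = {
    "drama":  ["full", "medium", "close-up", "extreme close-up", "medium"],
    "action": ["long", "medium", "close-up", "close-up", "long"],
}

def match_images_to_template(images, genre):
    """Greedily pick, for each slot of the genre template, the first unused
    image whose estimated shot size matches; None marks an unfilled slot."""
    used = set()
    sequence = []
    for wanted in TEMPLATES[genre]:
        pick = next((name for name, size in images
                     if size == wanted and name not in used), None)
        if pick is not None:
            used.add(pick)
        sequence.append(pick)
    return sequence

images = [("img1.jpg", "medium"), ("img2.jpg", "close-up"),
          ("img3.jpg", "full"), ("img4.jpg", "medium")]
print(match_images_to_template(images, "drama"))
# -> ['img3.jpg', 'img1.jpg', 'img2.jpg', None, 'img4.jpg']
```

Unfilled slots (here the extreme close-up) could be dropped or synthesized by cropping, which connects this entry to the ROI-cropping work listed later in these results.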

Automatic Video Editing Application based on Climax Pattern Classified by Genre (장르별 클라이맥스 패턴 적용 자동 영상편집 어플리케이션)

  • Im, Hyejeong;Mun, Hyejun;Park, Gaeun;Lim, Yangmi
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2020.07a / pp.611-612 / 2020
  • Recently, platform operators such as YouTube and Naver have been actively using AI technology to develop applications that automatically generate good-quality videos with minimal effort, in order to secure a large and diverse supply of videos. The most prominent example is IBM Watson's cognitive highlights technology, which automatically generates highlight clips using crowd cheering sounds and sport-specific data. However, current technology still falls short when it comes to automatically generating videos with a storytelling structure that appeals to human emotions. In this paper, we analyze the video editing style of movie climax sections, visualize the genre-specific patterns of shot-size change, and build templates that encode the editing differences between genres; we then develop an application that recommends the user's image data according to the characteristics of each genre's climax pattern and automatically generates a short video. We expect this work to be highly effective in reducing video editing time, the most time-consuming task in the single-person media industry and in cyber education.


An Automatic Face Hiding System based on the Deep Learning Technology

  • Yoon, Hyeon-Dham;Ohm, Seong-Yong
    • International Journal of Advanced Culture Technology / v.7 no.4 / pp.289-294 / 2019
  • As social network service platforms grow and the one-person media market expands, people upload their own photos and/or videos through multiple open platforms. However, it can be illegal to upload digital content containing the faces of others to public sites without their permission. Therefore, many people spend much time and effort editing such content so that the faces of others are not exposed to the public. In this paper, we propose an automatic face-hiding system called 'autoblur', which detects all unregistered faces and mosaics them automatically. The system has been implemented using the MIT-licensed GitHub open-source 'Face Recognition' library, which is based on deep learning technology. In this system, about two dozen face images of the user are taken from different angles to register his/her face. Once the face of the user is learned and registered, the system detects all other faces in a given photo or video and blurs them out. Our experiments show that the system produces quick and correct results for the sample photos.
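The 'Face Recognition' library compares faces by thresholding the Euclidean distance between 128-dimensional face encodings (default tolerance 0.6). The keep-or-blur decision at the heart of such a system can be sketched with toy 3-D vectors standing in for real encodings (function names here are illustrative, not the library's API):

```python
def euclidean(a, b):
    """Euclidean distance between two equal-length vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def faces_to_blur(detected, registered, tolerance=0.6):
    """Return indices of detected face encodings that match none of the
    registered user's encodings -- the same distance-threshold decision the
    face_recognition library makes, sketched with toy 3-D vectors
    (real encodings are 128-D)."""
    return [i for i, enc in enumerate(detected)
            if all(euclidean(enc, reg) > tolerance for reg in registered)]

registered = [[0.1, 0.2, 0.3]]             # the user's registered encoding(s)
detected = [[0.1, 0.2, 0.31],              # close to the user -> keep
            [0.9, 0.8, 0.7]]               # far from the user -> blur
print(faces_to_blur(detected, registered))  # -> [1]
```

Registering roughly two dozen images from different angles, as the paper does, gives several registered encodings per user, which makes the all-distances-above-tolerance test far less likely to blur the user by mistake.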

Soccer Video Highlight Building Algorithm using Structural Characteristics of Broadcasted Sports Video (스포츠 중계 방송의 구조적 특성을 이용한 축구동영상 하이라이트 생성 알고리즘)

  • 김재홍;낭종호;하명환;정병희;김경수
    • Journal of KIISE: Software and Applications / v.30 no.7_8 / pp.727-743 / 2003
  • This paper proposes an automatic highlight-building algorithm for soccer video that exploits a structural characteristic of broadcast sports video: an interesting (or important) event, such as a goal or foul, is followed by a replay shot surrounded by gradual shot-change effects like wipes. This shot-editing rule is used to analyze the structure of broadcast soccer video and extract the shots involving important events to build a highlight. The algorithm first uses the spatio-temporal image of the video to detect wipe transitions and zoom-out/in shot changes, which are then used to detect replay shots. However, using the spatio-temporal image alone to detect wipe transitions requires too much computation, and the algorithm would have to change whenever the wipe pattern changes. To solve these problems, a two-pass detection algorithm and a pixel sub-sampling technique are proposed. Furthermore, to detect zoom-out/in shot changes and replay shots more precisely, the green-area ratio and the motion energy are also computed. Finally, highlight shots composed of event and player shots are extracted using the pre-detected replay shots and zoom-out/in shot-change points. The proposed algorithm will be useful for web or broadcasting services requiring abstracted soccer video.
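The paper's exact green-area-ratio formula is not given in the abstract; a common simplification is to count pixels whose green channel dominates, since field shots are mostly grass while close-ups and replays are not. A minimal sketch (the channel test and threshold are assumptions):

```python
def green_area_ratio(frame):
    """Fraction of pixels that look like pitch grass (green channel clearly
    dominant) in an RGB frame -- a simplified stand-in for a green-area-ratio
    feature that separates wide field shots from close-ups and replays."""
    total = green = 0
    for row in frame:
        for r, g, b in row:
            total += 1
            if g > r and g > b and g > 80:  # heuristic grass test
                green += 1
    return green / total

pitch = [[(30, 150, 40)] * 4] * 3    # synthetic grass-colored frame
crowd = [[(120, 110, 100)] * 4] * 3  # synthetic frame with no dominant green
print(green_area_ratio(pitch), green_area_ratio(crowd))  # -> 1.0 0.0
```

A high ratio suggests a long shot of the field; a sudden drop combined with low motion energy is evidence for the close-up and replay shots the algorithm is after. Real systems usually test in HSV space with a calibrated hue range rather than raw RGB dominance.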

Generating Extreme Close-up Shot Dataset Based On ROI Detection For Classifying Shots Using Artificial Neural Network (인공신경망을 이용한 샷 사이즈 분류를 위한 ROI 탐지 기반의 익스트림 클로즈업 샷 데이터 셋 생성)

  • Kang, Dongwann;Lim, Yang-mi
    • Journal of Broadcast Engineering / v.24 no.6 / pp.983-991 / 2019
  • This study aims to analyze movies, which contain various stories, according to the size of their shots. To achieve this, a dataset must be classified according to shot size: extreme close-up shots, close-up shots, medium shots, full shots, and long shots. However, because typical video storytelling is composed mainly of close-up, medium, full, and long shots, constructing an appropriate dataset of extreme close-up shots is not easy. To solve this, we propose an image-cropping method based on region-of-interest (ROI) detection. In this paper, we use face detection and saliency detection to estimate the ROI. By cropping the ROI of close-up images, we generate extreme close-up images. The dataset enriched by the proposed method is used to build a model for classifying shots by size. The study can help analyze the emotional changes of characters in video stories and predict how the composition of a story changes over time. If AI is used more actively in entertainment fields in the future, it is expected to affect the automatic adjustment and creation of characters, dialogue, and image editing.
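Once a face or saliency detector has produced an ROI bounding box, generating an extreme close-up is essentially a padded crop. A minimal sketch on a 2-D image represented as nested lists (the margin handling is an assumption; the paper does not specify it):

```python
def crop_to_roi(image, box, margin=0.1):
    """Crop a 2-D image (list of rows) to the ROI box (top, left, bottom,
    right), padded by a relative margin and clamped to the image bounds --
    a sketch of generating an extreme close-up from a close-up frame
    given a detected face/saliency ROI."""
    top, left, bottom, right = box
    pad_y = int((bottom - top) * margin)
    pad_x = int((right - left) * margin)
    top = max(0, top - pad_y)
    left = max(0, left - pad_x)
    bottom = min(len(image), bottom + pad_y)
    right = min(len(image[0]), right + pad_x)
    return [row[left:right] for row in image[top:bottom]]

image = [[y * 10 + x for x in range(10)] for y in range(10)]
crop = crop_to_roi(image, (2, 3, 8, 9), margin=0.0)
print(len(crop), len(crop[0]))  # -> 6 6
```

Crops produced this way can then be labeled "extreme close-up" and mixed into the training set, which is how the proposed method balances the otherwise under-represented class.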