• Title/Summary/Keyword: scene change

Search Results: 398

Scene Change Detection Robust to Video Distortion using SIFT (SIFT를 이용한 영상 변형에 강인한 장면 전환 검출)

  • Moon, Won-Jun;Seo, Young-Ho;Kim, Dong-Wook
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2019.06a
    • /
    • pp.118-119
    • /
    • 2019
  • In this paper, we propose a method for detecting scene changes, whose importance is growing with the expansion of video production and distribution. Because scene changes must be detected identically even when various distortions such as resolution conversion, subtitle insertion, compression, and image inversion are introduced during distribution, the matching rate between frames is computed using a preprocessing step, SIFT-based feature extraction, and a matching method that accounts for the distortions. A scene change is then declared based on a threshold on the matching rate. The validity of the method is evaluated by computing the matching rate between features from the original video and features from videos subjected to various distortions.

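The matching-rate test described in the abstract above can be sketched as follows. This is a toy illustration, not the paper's code: the descriptors are stand-in 2-D vectors rather than real 128-D SIFT descriptors, and the 0.75 ratio and 0.2 cut threshold are assumed values.

```python
from math import dist

def match_rate(desc_a, desc_b, ratio=0.75):
    """Fraction of descriptors in desc_a with a confident match in desc_b
    (Lowe-style ratio test: best match clearly closer than second best)."""
    if not desc_a or len(desc_b) < 2:
        return 0.0
    matched = 0
    for d in desc_a:
        dists = sorted(dist(d, e) for e in desc_b)
        if dists[0] < ratio * dists[1]:
            matched += 1
    return matched / len(desc_a)

def is_scene_change(desc_prev, desc_curr, threshold=0.2):
    """Declare a cut when the inter-frame match rate falls below threshold."""
    return match_rate(desc_prev, desc_curr) < threshold
```

Matching features of the original video against a distorted copy of the same frame should still yield a high rate, while consecutive frames across a cut share few features and fall below the threshold.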

Case study of Creating CG Handheld Steadicam using maya nParticle

  • Choi, Chul Young
    • International journal of advanced smart convergence
    • /
    • v.10 no.3
    • /
    • pp.157-162
    • /
    • 2021
  • With the recent increase in YouTube content, many YouTubers shoot with handheld cameras, and audiences have grown accustomed to handheld camera movement. As cameras now move faster than in older films, and move expressively to the music in music videos, camera movement in CG animation also needs to change. A handheld Steadicam produces natural camera movement by compensating so that the frame does not shake significantly even under large vibration, and by minimizing rotation. To implement such camera movement, we built a handheld Steadicam rig using the nParticle simulation of the Maya software and applied it to a scene to verify whether the required natural and varied movements can be achieved.

A study on the applicability of interactive technology in VR video content production

  • Liu, Miaoyihai;Chung, Jeanhun
    • International journal of advanced smart convergence
    • /
    • v.11 no.2
    • /
    • pp.71-76
    • /
    • 2022
  • The continuous development of virtual reality technology over the last five years has brought a major change to the future film industry. Interactive VR films that use virtual reality technology increase audience immersion thanks to the characteristics of interaction, and will soon offer a unique opportunity for new immersive experiences across various forms of cinema. In this paper, the interaction design of narrative VR films is studied using as an example the film that won the Best VR Experience Award at the Venice International Film Festival. Future development should improve scene transitions, dizziness, modes of interaction, and other issues, so that audiences feel a greater sense of participation, immersion, and curiosity while watching, making film viewing a more engaging part of life.

Scene Change Detection Based on SURF (SURF 기반의 장면 전환 검출 방법)

  • Oh, Hyunju;Park, Jiyong;Hong, Seokmin;Kang, Hyunmin
    • Annual Conference of KIPS
    • /
    • 2022.11a
    • /
    • pp.637-639
    • /
    • 2022
  • When only a single characteristic such as a histogram is considered for scene change detection, detection is difficult in videos with heavy motion, widely varying illumination, or scene changes into monotone colors. To address this, we propose a method that first compares color histograms between frames and then applies SURF.
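The first stage of the two-stage approach above can be sketched as a coarse histogram comparison that flags candidate cuts for the feature-matching stage to confirm. This is a minimal sketch, not the paper's code: frames are lists of RGB tuples, and the bin count and 0.5 threshold are assumed values.

```python
def color_histogram(pixels, bins=8):
    """Coarse per-channel histogram of an RGB pixel list, normalised to sum 1."""
    hist = [0] * (bins * 3)
    for r, g, b in pixels:
        for ch, v in enumerate((r, g, b)):
            hist[ch * bins + min(v * bins // 256, bins - 1)] += 1
    n = len(pixels) * 3
    return [h / n for h in hist]

def histogram_distance(h1, h2):
    """L1 distance between two normalised histograms (0 = identical)."""
    return sum(abs(a - b) for a, b in zip(h1, h2))

def candidate_cut(frame_a, frame_b, hist_threshold=0.5):
    """Stage 1: flag frames whose colour distribution changes sharply.
    Stage 2 (feature matching, e.g. SURF) would confirm or reject the flag."""
    return histogram_distance(color_histogram(frame_a),
                              color_histogram(frame_b)) > hist_threshold
```

The second stage matters precisely in the failure cases the abstract names: heavy motion or illumination change can trip the histogram test even when the scene is unchanged.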

A Study on the Development of an Editing Tool for Virtual Reality Content Production in a Web Environment (웹 환경에서 가상현실 콘텐츠 제작을 위한 편집 도구 개발 연구)

  • Hyun-been Kim;Jun-sung Min
    • Proceedings of the Korean Institute of Navigation and Port Research Conference
    • /
    • 2022.11a
    • /
    • pp.293-295
    • /
    • 2022
  • Because existing virtual reality (VR) content is delivered in a finished form, it cannot respond flexibly to legal revisions or changes in training curricula. A tool that allows VR content to be edited at production time, with the edited content running immediately, would therefore be an alternative for VR content whose training details change continually. To this end, we implement an editing tool with which users directly create VR content and compose the detailed scenes of the generated scenarios, and develop it as a web-based tool so that it can also run in environments without a VR device, enabling simultaneous editing and execution of VR content.


Implementation of a Robust Visual Surveillance System for the Variation of Illumination Lights (조명광 변화에 강인한 영상 감시시스템 구현)

  • Jung, Yong-Bae;Kim, Jung-Hyeon;Kim, Tae-Hyo
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.10 no.3
    • /
    • pp.517-525
    • /
    • 2006
  • In this paper, an algorithm that maintains surveillance performance despite changes in lighting is proposed and verified experimentally. One of the main problems in implementing a visual surveillance system is the image-processing technique needed to cope with variations in illumination. Conventional systems generally do not consider errors caused by lighting changes because they are used indoors. In practice, the causes of degraded images can be classified as ghosts due to light reflection and shadows in the scene; weak images and noise at night especially reduce the performance of a visual surveillance system. In this paper, a filter that improves images under changing illumination is designed, and a Gabor filter is used for recognition and tracking of moving objects. The resulting system achieved recognition and tracking rates of 92~100% in the daytime and 80~90% at night.

A Study on the Change Process of Students' Perception and Expression About Distance and Speed in Distance Function and Speed Function (거리함수와 속력함수에서, 거리와 속력의 관계에 대한 학생들의 인식과 표현의 변화과정에 대한 연구)

  • Lee, Dong Gun;Ahn, Sang Jin;Kim, Suk Hui;Shin, Jae Hong
    • School Mathematics
    • /
    • v.18 no.4
    • /
    • pp.881-901
    • /
    • 2016
  • This study investigates, through a teaching experiment, students' recognition and expression of the relationship among time, distance, and speed. In the process, students not only changed their perception of this relationship, but also came to view average speed both as the slope of the line connecting the endpoints of an interval on the distance function and as the height of a rectangle on the speed function. The study documents students extending the relation 'distance = time × speed' to 'distance = time × average speed', and a student who reasoned continuously showed the possibility of constructing a new function that explains the change of the primitive function by assigning the average rate of change to each interval. Although this study was conducted with a limited number of students, its observations of how students handle the time-distance-speed relationship suggest several implications. We hope these results will serve as a starting point for further studies toward constructing an integral learning model.
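The extension the abstract describes can be written compactly. On a distance function $s(t)$, the average speed over an interval is the slope of the secant line, so the elementary relation generalises as

```latex
\bar{v}_{[t_1, t_2]} = \frac{s(t_2) - s(t_1)}{t_2 - t_1}
\quad\Longrightarrow\quad
s(t_2) - s(t_1) = (t_2 - t_1)\,\bar{v}_{[t_1, t_2]}
```

which is 'distance = time × average speed' on each interval. Assigning $\bar{v}$ to each subinterval yields a step function whose accumulated rectangle areas recover the change of $s$, the germ of the integral relationship the study points toward.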

Automatic Change Detection of MODIS NDVI using Artificial Neural Networks (신경망을 이용한 MODIS NDVI의 자동화 변화탐지 기법)

  • Jung, Myung-Hee
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.49 no.2
    • /
    • pp.83-89
    • /
    • 2012
  • Natural vegetation cover, a very important earth resource, has been significantly altered by human activity. Since this now has a significant effect on the global climate, various studies of the vegetation environment, including forests, have been performed and their results are used in policy decision making. Remotely sensed data can detect, identify, and map vegetation cover change based on the analysis of spectral characteristics, and are therefore used intensively for monitoring vegetation resources. Among the vegetation indices extracted from the spectral responses of remotely sensed data, NDVI is the most popular; it measures how much photosynthetically active vegetation is present in the scene. In this study, for change detection in vegetation cover, a multi-layer perceptron network (MLPN) is designed as a nonparametric approach and applied to MODIS/Aqua 16-day L3 global 250 m SIN Grid (v005) vegetation index data (MYD13Q1). The feature vector for change detection is constructed from the direct NDVI difference at a pixel as well as differences over a subset of the NDVI time series. The research covered five years (2006-2010) over the Korean peninsula.
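The feature-vector construction described above can be sketched per pixel. This is a simplified illustration, not the paper's code: the offsets and the 0.2 magnitude threshold are assumed values, and a simple threshold rule stands in for the trained MLP classifier.

```python
def change_features(ndvi_series_t1, ndvi_series_t2, lags=(0, 1, 2)):
    """Feature vector for one pixel: NDVI differences between the two years
    at several 16-day composite offsets, as input to a change classifier."""
    return [ndvi_series_t2[k] - ndvi_series_t1[k] for k in lags]

def threshold_change(features, magnitude=0.2):
    """Toy stand-in for the trained MLP: flag change when the mean
    absolute NDVI difference exceeds a magnitude threshold."""
    return sum(abs(f) for f in features) / len(features) > magnitude
```

Using several composite dates per pixel, rather than a single difference, helps distinguish persistent cover change from seasonal phenology.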

An Efficient Walkthrough from Two Images using Spidery Mesh Interface and View Morphing (Spidery 매쉬 인터페이스와 뷰 모핑을 이용한 두 이미지로부터의 효율적인 3차원 애니메이션)

  • Cho, Hang-Shin;Kim, Chang-Hun
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.7 no.2
    • /
    • pp.132-140
    • /
    • 2001
  • This paper proposes an efficient walkthrough animation from two images of the same scene. Tour Into the Picture (TIP) enables easy and fast walkthrough animation from a single image, but its foreground objects lose realism when the viewpoint moves from side to side; view morphing uses only 2D transitions between two images, but restricts the camera path to the line between the two views. Combining the advantages of these two image-based techniques, this paper presents a new virtual navigation technique that enables natural scene transformation when the viewpoint moves side to side as well as in depth. In our method, view morphing is applied only to foreground objects, while the background, which is perceived less attentively, is mapped onto a cube-like 3D model as in TIP, simultaneously saving laborious 3D reconstruction costs and improving visual realism. To do this, we newly define a camera transformation between the two images from the relationship between the spidery mesh transformation and its corresponding 3D view change. The resulting animation shows that our method creates realistic 3D virtual navigation with a simple interface.

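The core step of the view-morphing side of the method above is linear interpolation of corresponding image points at a blend parameter t. This is a minimal sketch under stated assumptions: it shows only the interpolation itself, omitting the rectification to parallel views and the final unwarping that full view morphing requires.

```python
def interpolate_points(pts0, pts1, t):
    """Linearly interpolate corresponding point pairs between two rectified
    views; t = 0 gives the first view, t = 1 the second."""
    return [((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
            for (x0, y0), (x1, y1) in zip(pts0, pts1)]
```

Sweeping t from 0 to 1 over the foreground correspondences, while the TIP-style background cube is rendered from the matching 3D camera, yields the combined transition the paper describes.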

Emotion-based Video Scene Retrieval using Interactive Genetic Algorithm (대화형 유전자 알고리즘을 이용한 감성기반 비디오 장면 검색)

  • Yoo Hun-Woo;Cho Sung-Bae
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.10 no.6
    • /
    • pp.514-528
    • /
    • 2004
  • An emotion-based video scene retrieval algorithm is proposed in this paper. First, abrupt and gradual shot boundaries are detected in a video clip representing a specific story. Then five features ('average color histogram', 'average brightness', 'average edge histogram', 'average shot duration', and 'gradual change rate') are extracted from each video, and the mapping between these features and the emotional space the user has in mind is established by an interactive genetic algorithm. Once the user selects, from an initial population, the videos that convey the target emotion, the feature vectors of the selected videos are treated as chromosomes and genetic crossover is applied to them. The new chromosomes are then compared with the feature vectors of the database videos using a similarity function, and the most similar videos become the next generation. Iterating this procedure retrieves a population of videos matching the emotion the user has in mind. To validate the method, six emotion categories ('action', 'excitement', 'suspense', 'quietness', 'relaxation', and 'happiness') were used in experiments; over 300 commercial videos, retrieval results show 70% effectiveness on average.
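The interactive-GA loop described above can be sketched as follows. This is a toy illustration under stated assumptions, not the paper's code: feature vectors are short numeric lists, the similarity function is a negative L1 distance, and the population size is arbitrary.

```python
import random

def crossover(parent_a, parent_b, rng):
    """One-point crossover over two video feature vectors (chromosomes)."""
    point = rng.randrange(1, len(parent_a))
    return parent_a[:point] + parent_b[point:]

def similarity(v1, v2):
    """Negative L1 distance: higher means more similar feature vectors."""
    return -sum(abs(a - b) for a, b in zip(v1, v2))

def next_generation(selected, database, rng, size=4):
    """Breed the user-selected videos, then keep the database videos most
    similar to the offspring, as in the interactive-GA retrieval loop."""
    children = [crossover(rng.choice(selected), rng.choice(selected), rng)
                for _ in range(size)]
    scored = [max(similarity(child, v) for child in children) for v in database]
    ranked = sorted(range(len(database)), key=lambda i: -scored[i])
    return [database[i] for i in ranked[:size]]
```

Each iteration replaces one round of user feedback: the human judges which retrieved videos match the emotion, and the GA moves the search toward that region of feature space.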