• Title/Summary/Keyword: Scene Transition Detection


Electrooculogram-based Scene Transition Detection for Interactive Video Retrieval (인터랙티브 비디오 검색을 위한 EOG 기반 장면 전환 검출)

  • Lee, Chung-Yeon; Lee, Beom-Jin; Zhang, Byoung-Tak
    • Proceedings of the Korean Information Science Society Conference / 2012.06b / pp.408-410 / 2012
  • Existing video retrieval methods rely on associated annotations or visual information and make little use of users' responses. If cognitive responses such as brain signals or eye-tracking data recorded while watching video were used to extract the interest or affective information users exhibit for each part of a continuous video stream, more interactive video retrieval and recommendation would become possible. In this paper, we record the electrooculogram (EOG) of users watching video, analyze event-related potentials at points where scene transitions occur to find the characteristic responses that appear there, and derive a cognitive interpretation. Experimental results confirm a peak similar to the P300 component 200-700 ms after a scene transition; we interpret this as an expectation mismatch in the subject's comprehension of the video content and an increase in attention caused by the scene transition.
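The event-related-potential analysis the abstract describes can be sketched as epoch averaging time-locked to scene-transition onsets, then locating the peak in the post-onset window. This is a minimal NumPy sketch on synthetic data; the sampling rate, event times, and signal are placeholders, not the authors' recordings.

```python
# Hedged sketch: average EOG epochs around scene-transition events and
# find the latency of the largest post-onset deflection (P300-like peak).
import numpy as np

FS = 250  # sampling rate in Hz (assumed)

def erp_average(signal, event_samples, pre=0.2, post=0.7, fs=FS):
    """Average fixed-length epochs around each event, baseline-corrected
    against the pre-stimulus interval."""
    pre_n, post_n = int(pre * fs), int(post * fs)
    epochs = []
    for ev in event_samples:
        if ev - pre_n < 0 or ev + post_n > len(signal):
            continue  # skip events too close to the recording edges
        epoch = signal[ev - pre_n: ev + post_n].astype(float)
        epoch -= epoch[:pre_n].mean()  # subtract pre-stimulus baseline
        epochs.append(epoch)
    return np.mean(epochs, axis=0)

# Synthetic demo: noise plus a positive bump ~300 ms after each "transition".
rng = np.random.default_rng(0)
sig = rng.normal(0, 1, 5000)
events = [1000, 2000, 3000, 4000]
bump = np.exp(-0.5 * ((np.arange(int(0.9 * FS)) - int(0.5 * FS)) / 10) ** 2)
for ev in events:
    sig[ev - int(0.2 * FS): ev + int(0.7 * FS)] += 5 * bump

erp = erp_average(sig, events)
peak_ms = (np.argmax(erp) - int(0.2 * FS)) / FS * 1000  # latency after onset
```

Averaging across events suppresses background noise by roughly the square root of the event count, which is why the deflection becomes visible in the mean even when single trials are noisy.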

Gradual Scene Transition Detection using Summation of Feature Difference Area (특징값 비유사도 영역의 누적 분포를 이용한 점진적 장면전환 검출)

  • Lee Jong-Myoung; Kim Myoung-Joon; Seo Byeong-Rak; Kim Whoi-Yul
    • Proceedings of the Korean Information Science Society Conference / 2005.07b / pp.877-879 / 2005
  • Scene transition detection is useful for many applications related to video browsing, retrieval, and summarization. In this paper, we propose a method for detecting gradual scene transitions: within a local window of length N, we form the distribution obtained by lifting the dissimilarity values by the minimum value in the window, accumulate the dissimilarity values at the top of this distribution, and declare a gradual transition when the accumulated value exceeds a preset threshold. The accumulated area over a transition interval can be obtained using the min-max distribution. In experiments, we compared the proposed method with existing methods; it detected slightly fewer correct transitions but produced fewer false detections. The proposed method makes threshold selection easy, depends little on the length of the transition, and is fast enough for real-time processing.
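The windowed accumulation described above can be sketched as follows: subtract the window minimum from every value in an N-length window and sum the remaining "area" above the minimum. The window size, threshold, and synthetic dissimilarity curve are illustrative assumptions, not the paper's settings.

```python
# Hedged sketch of gradual-transition detection by accumulating the
# dissimilarity area above the local-window minimum.
import numpy as np

def detect_gradual(dissim, n=10, threshold=1.0):
    """Return window start indices whose N-length window accumulates
    enough excess dissimilarity over the window minimum."""
    hits = []
    for i in range(len(dissim) - n + 1):
        window = dissim[i:i + n]
        area = np.sum(window - window.min())  # area above the minimum
        if area > threshold:
            hits.append(i)
    return hits

# Synthetic curve: flat noise floor with a dissolve-like rising ramp.
d = np.full(60, 0.05)
d[25:35] = np.linspace(0.05, 0.6, 10)  # gradually rising dissimilarity
hits = detect_gradual(d, n=10, threshold=1.0)
```

Because the window minimum is subtracted before accumulating, a constant offset in the dissimilarity curve contributes nothing, which is one way to read the paper's claim that threshold selection becomes easy.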


Fast image stitching method for handling dynamic object problems in Panoramic Images

  • Abdukholikov, Murodjon; Whangbo, Taegkeun
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.11 / pp.5419-5435 / 2017
  • The construction of panoramic images on smartphones and other low-powered devices is a challenging task. In this paper, we propose a new approach for smoothly stitching images on mobile phones in the presence of moving objects in the scene. Our main contributions include handling moving-object problems, reducing processing time, and generating rectangular panoramic images. First, unique and robust feature points are extracted using the fast ORB method, and a feature-matching technique is applied to match the extracted feature points. After obtaining good matches, we employ the non-deterministic RANSAC algorithm to discard wrong matches and estimate the parameters of the homography transformation matrix. Afterward, we determine the precise overlap regions of neighboring images and calculate their absolute differences. Then, a thresholding operation and noise-removal filtering are applied to create a mask of possible moving-object regions. Next, an optimal seam is estimated using a dynamic programming algorithm, and linear blending combined with the mask information is applied to avoid seam transitions and ghosting artifacts. Finally, an image-cropping operation is used to obtain a rectangular image from the stitched result. Experiments demonstrate that our method can produce panoramic images quickly despite the presence of moving objects.
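One step of this pipeline lends itself to a compact sketch: building the moving-object mask from the absolute difference of the two overlap regions. The code below is a pure-NumPy stand-in with assumed threshold and filter; the paper's full pipeline (ORB matching, RANSAC homography, dynamic-programming seam search) is not reproduced.

```python
# Hedged sketch of the moving-object masking step: threshold the absolute
# difference of the overlaps, then clean isolated noise with a crude 3x3
# majority filter (an assumption standing in for the paper's filter).
import numpy as np

def moving_object_mask(overlap_a, overlap_b, thresh=30):
    """Binary mask of pixels that differ strongly between the two overlaps."""
    diff = np.abs(overlap_a.astype(int) - overlap_b.astype(int))
    mask = (diff > thresh).astype(np.uint8)
    padded = np.pad(mask, 1)
    out = np.zeros_like(mask)
    h, w = mask.shape
    for y in range(h):
        for x in range(w):
            # keep a pixel only if at least 5 of its 3x3 neighborhood are set
            out[y, x] = 1 if padded[y:y + 3, x:x + 3].sum() >= 5 else 0
    return out

# Demo: identical overlaps except for a small "moving object" block.
a = np.zeros((20, 20), dtype=np.uint8)
b = a.copy()
b[5:10, 5:10] = 200  # object present in one view only
mask = moving_object_mask(a, b)
```

The mask marks regions where the two views disagree; the seam search can then be biased to route around those regions so the moving object is taken entirely from one source image.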

Auto Frame Extraction Method for Video Cartooning System (동영상 카투닝 시스템을 위한 자동 프레임 추출 기법)

  • Kim, Dae-Jin; Koo, Ddeo-Ol-Ra
    • The Journal of the Korea Contents Association / v.11 no.12 / pp.28-39 / 2011
  • While broadband multimedia technologies have been developing, the commercial market for digital content has also been spreading widely. In particular, the digital cartoon market, such as internet cartoons, has grown rapidly, so video cartooning has been researched continuously to address the shortage and limited variety of cartoons. Until now, video cartooning systems have focused on non-photorealistic rendering and word balloons, but meaningful frame extraction must take priority when a cartooning system is applied as a service. In this paper, we propose a new automatic frame extraction method for video cartooning systems. First, we separate the video and audio tracks of a movie and extract feature parameters such as MFCC and ZCR from the audio data. The audio signal is classified into speech, music, and speech+music by comparison with pre-trained audio data using a GMM classifier, which allows us to locate speech regions. For the video, we extract frames using a general scene-change detection method such as the histogram method and then select meaningful frames for the cartoon by applying face detection to the extracted frames. Scene-transition frames that contain faces within speech regions are then extracted automatically, and frames suitable for movie cartooning are obtained by selecting such transition frames over contiguous intervals of the time domain.
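Of the audio features named above, the zero-crossing rate (ZCR) is simple enough to sketch: the fraction of adjacent sample pairs with differing signs, which tends to be higher for noise-like speech fricatives than for sustained musical tones. The frame-free whole-signal computation and the synthetic signals are demo assumptions; the MFCC and GMM classification steps are omitted.

```python
# Hedged sketch of the ZCR feature used for speech/music discrimination.
import numpy as np

def zcr(x):
    """Fraction of adjacent sample pairs whose signs differ."""
    signs = np.sign(x)
    return np.mean(signs[:-1] != signs[1:])

fs = 8000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 200 * t)                 # music-like pure tone
noise = np.random.default_rng(1).normal(size=fs)   # noise-like "fricative"

zcr_tone, zcr_noise = zcr(tone), zcr(noise)
```

A 200 Hz tone crosses zero about 400 times per second, so its ZCR over 8000 samples is near 0.05, while white noise sits near 0.5; in practice the feature is computed per short frame so speech regions can be segmented over time.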

Content based Video Segmentation Algorithm using Comparison of Pattern Similarity (장면의 유사도 패턴 비교를 이용한 내용기반 동영상 분할 알고리즘)

  • Won, In-Su; Cho, Ju-Hee; Na, Sang-Il; Jin, Ju-Kyong; Jeong, Jae-Hyup; Jeong, Dong-Seok
    • Journal of Korea Multimedia Society / v.14 no.10 / pp.1252-1261 / 2011
  • In this paper, we propose a video segmentation algorithm based on the comparison of similarity patterns. Shot boundaries are categorized into two types, abrupt change and gradual change; representative examples of gradual change are dissolve, fade-in, fade-out, and wipe transitions. The proposed method treats shot-boundary detection as a two-class problem: whether a shot-boundary event happens or not. Defining the similarity between frames is essential for shot-boundary detection, so we propose two similarity measures, within-similarity and between-similarity. The within-similarity is defined by feature comparison between frames belonging to the same shot, and the between-similarity by feature comparison between frames belonging to different shots. Finally, we compare the statistical patterns of the within-similarity and between-similarity. Because this measure is robust to flashlights and object movement, the proposed algorithm contributes to reducing the false-positive rate. We employ color histograms and sub-block means of the frame image as frame features. We evaluated the algorithm on video datasets including the TREC-2001 and TREC-2002 sets, where it achieved 91.84% recall and 86.43% precision.
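The histogram feature and frame-to-frame similarity the abstract builds on can be sketched with per-frame intensity histograms compared by histogram intersection, declaring a boundary where similarity drops sharply. The paper's within/between statistical pattern comparison is simplified here to a single threshold, and the bin count, threshold, and synthetic shots are assumptions.

```python
# Hedged sketch: histogram-intersection similarity between consecutive
# frames, with a shot boundary flagged where similarity falls below a
# threshold (a simplification of the paper's pattern comparison).
import numpy as np

def histogram(frame, bins=16):
    """Normalized intensity histogram of one frame."""
    h, _ = np.histogram(frame, bins=bins, range=(0, 256))
    return h / h.sum()

def shot_boundaries(frames, threshold=0.5):
    """Indices i where similarity(frame[i-1], frame[i]) drops below threshold."""
    hists = [histogram(f) for f in frames]
    bounds = []
    for i in range(1, len(hists)):
        sim = np.minimum(hists[i - 1], hists[i]).sum()  # histogram intersection
        if sim < threshold:
            bounds.append(i)
    return bounds

# Demo: two synthetic shots with disjoint intensity ranges.
rng = np.random.default_rng(2)
shot1 = [rng.integers(0, 60, (24, 32)) for _ in range(5)]     # dark shot
shot2 = [rng.integers(180, 255, (24, 32)) for _ in range(5)]  # bright shot
bounds = shot_boundaries(shot1 + shot2)
```

Histogram intersection equals 1 for identical distributions and 0 for disjoint ones, so frames within a shot score high while the cut between the two synthetic shots scores near zero; the paper's within/between statistics refine exactly this kind of raw per-pair similarity.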