• Title/Abstract/Keyword: scene change

Search results: 398 items (processing time: 0.033 sec)

Sketch-based Image Retrieval System using Optimized Specific Region (최적화된 특정 영역을 이용한 스케치 기반 영상 검색 시스템)

  • Ko Kwang-Hoon;Kim Nac-Woo;Kim Tae-Eun;Choi Jong-Soo
    • The Journal of Korean Institute of Communications and Information Sciences
    • Vol. 30, No. 8C / pp.783-792 / 2005
  • This paper proposes a feature extraction method for sketch-based image retrieval of animation characters. We extract specific regions using scene change detection, correlation points between two frames, and the properties of animation production, and then detect areas of focused similar colors within each extracted region. The focused color (FC) of each region, its size, and the relations between FCs are used as the feature descriptor for image retrieval. Finally, a user can retrieve similar characters using the properties of animation production and the user's sketch as a query image.
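A minimal sketch of the two building blocks the abstract mentions, not the authors' implementation: histogram-based scene change detection between consecutive frames and a "focused color" (FC) descriptor for a region. The function names, histogram sizes, and the 0.5 threshold are illustrative assumptions.

```python
import cv2
import numpy as np

def is_scene_change(prev_frame, curr_frame, threshold=0.5):
    """Flag a scene change when the HSV-histogram distance between frames is large."""
    hists = []
    for frame in (prev_frame, curr_frame):
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        h = cv2.calcHist([hsv], [0, 1], None, [32, 32], [0, 180, 0, 256])
        hists.append(cv2.normalize(h, None).flatten())
    return cv2.compareHist(hists[0], hists[1], cv2.HISTCMP_BHATTACHARYYA) > threshold

def focused_color(region_bgr):
    """Return the dominant hue bin of a region and its relative coverage (size cue)."""
    hsv = cv2.cvtColor(region_bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0], None, [36], [0, 180]).flatten()
    dominant_bin = int(np.argmax(hist))
    coverage = float(hist[dominant_bin] / hist.sum())
    return dominant_bin, coverage
```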

Variable Dynamic Threshold Method for Video Cut Detection (동영상 컷 검출을 위한 가변형 동적 임계값 기법)

  • 염성주;김우생
    • The Journal of Korean Institute of Communications and Information Sciences
    • Vol. 27, No. 4A / pp.356-363 / 2002
  • Video scene segmentation plays a fundamental role in content-based video analysis, and many scene segmentation schemes have been proposed in previous research. However, because a single fixed threshold is usually used for cut detection, it is difficult to find an optimal threshold value for the various kinds of movies and their content. In this paper, we propose a variable dynamic threshold method that changes the threshold value according to a probability distribution of cut detection intervals; the frame feature differences and cut detection intervals observed at previous cuts are used to determine the next cut detection. For this, we present a cut detection algorithm and a parameter generation method that adjust the threshold value at runtime. Experimental results show that the proposed method reduces the false alarm rate more effectively than existing methods.
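A minimal sketch of a variable dynamic threshold for cut detection, under assumptions rather than the paper's exact formulation: the threshold is re-estimated from recent frame-difference statistics and relaxed as the time since the last detected cut grows. The window size, the `k` sensitivity factor, and the relaxation rate are hypothetical parameters.

```python
from collections import deque
import numpy as np

class DynamicThresholdCutDetector:
    def __init__(self, window=120, k=3.0, relax_per_frame=0.002):
        self.diffs = deque(maxlen=window)   # recent frame-difference history
        self.k = k                          # sensitivity in standard deviations
        self.relax = relax_per_frame        # relaxation per frame since the last cut
        self.frames_since_cut = 0

    def update(self, frame_diff):
        """Feed one frame difference; return True if a cut is declared."""
        self.frames_since_cut += 1
        if len(self.diffs) < 10:            # warm-up: just collect statistics
            self.diffs.append(frame_diff)
            return False
        mean, std = np.mean(self.diffs), np.std(self.diffs)
        threshold = mean + self.k * std
        # Lower the threshold gradually when no cut has been seen for a while.
        threshold *= max(0.5, 1.0 - self.relax * self.frames_since_cut)
        self.diffs.append(frame_diff)
        if frame_diff > threshold:
            self.frames_since_cut = 0
            return True
        return False
```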

Use of Frame in Todd Haynes Films (토드 헤인즈 작품에 나타난 프레임 사용)

  • Yoon, Soo-In
    • The Journal of the Korea Contents Association
    • Vol. 19, No. 12 / pp.633-646 / 2019
  • This paper analyzes two films on the theme of love directed by Todd Haynes, released in 2002 and 2015 respectively. Both movies have similar narratives of obstacles to love and their overcoming, and both deal with love that is not accepted by society. The two films conclude differently under similar situations, and the author's change can be observed in that difference. The purpose of this study is to examine the author's arguments and the process of change through comparison with the original novels and the methods of visual expression used in the films, focusing in particular on the variations of characters from the original novels and on the mise-en-scène and framing methods that visually depict the obstacles to the lovers' minds and love.

Scene Change Detection in MPEG-1 Video Stream using Macroblock Information (매크로블록 정보를 이용한 MPEG-1 비디오 스트림의 장면 변화검출)

  • Im, Yeong-In;Nang, Jong-Ho
    • Journal of KIISE: Computer Systems and Theory
    • Vol. 26, No. 4 / pp.527-537 / 1999
  • To build a video database for applications that use video data, a technique is needed that automatically detects scene changes according to the content of the video. This paper proposes a method for automatically detecting scene changes in video data stored in the MPEG-1 format and shows its usefulness through experiments. In the proposed method, each macroblock of a B frame is compared with the type of the corresponding macroblock of the temporally preceding B frame, and a scene change is declared when the sum of these per-macroblock comparison results exceeds a threshold. Because the proposed method can also detect scene changes at I and P frames using only the macroblock-layer information of B frames in the input video stream, precise detection is possible. Moreover, since it uses not only the type but also the position information of each macroblock, it is more robust than existing methods that detect scene changes by simply counting the macroblocks within a single B frame. Experiments on news and movie video data encoded in the MPEG-1 format show that the proposed method achieves an accuracy of over 95%. The proposed MPEG-1 scene change detection method can be usefully applied to building digital libraries and similar systems based on MPEG-1 video data.
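The decision rule described in the abstract can be sketched as follows, assuming the macroblock types of each B frame have already been parsed from the MPEG-1 bitstream into 2-D grids of prediction-mode labels. The weighting table and the threshold ratio are illustrative assumptions, not the paper's parameters.

```python
# Example weights: a macroblock whose prediction direction flips between
# co-located positions in consecutive B frames gives stronger evidence of a cut.
CHANGE_WEIGHT = {
    ("forward", "backward"): 1.0,
    ("backward", "forward"): 1.0,
    ("bidirectional", "intra"): 0.5,
    ("intra", "bidirectional"): 0.5,
}

def scene_change_score(prev_types, curr_types):
    """Sum per-macroblock evidence by comparing co-located macroblock types."""
    score = 0.0
    for prev_row, curr_row in zip(prev_types, curr_types):
        for prev_mb, curr_mb in zip(prev_row, curr_row):
            score += CHANGE_WEIGHT.get((prev_mb, curr_mb), 0.0)
    return score

def is_scene_change(prev_types, curr_types, threshold_ratio=0.4):
    """Declare a scene change when the accumulated score exceeds a fraction of
    the macroblock count; position is used implicitly via co-located comparison."""
    total_mbs = sum(len(row) for row in prev_types)
    return scene_change_score(prev_types, curr_types) > threshold_ratio * total_mbs
```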

Fast Game Encoder Based on Scene Descriptor for Gaming-on-Demand Service (주문형 게임 서비스를 위한 장면 기술자 기반 고속 게임 부호화기)

  • Jeon, Chan-Woong;Jo, Hyun-Ho;Sim, Dong-Gyu
    • Journal of Korea Multimedia Society
    • Vol. 14, No. 7 / pp.849-857 / 2011
  • Gaming on demand (GOD) lets people enjoy games by encoding and transmitting the game screen on a server and decoding the video on a client. In this paper, we propose a fast game video encoder that serves multiple users over the network with low-powered devices. In the proposed system, the computational complexity of the game encoder is reduced by using scene descriptors, which consist of object motion vectors, global motion, and scene change information. With this additional information from the game engine, the proposed encoder does not need to perform computationally expensive processes such as motion estimation and rate-distortion optimization; both are skipped based on the scene descriptors. We found that the proposed method improved encoding speed by 192% in terms of FPS compared with the x264 software, and with partial assembly code we improved coding speed by a further 86% in FPS. The proposed fast encoder could encode at over 60 FPS for real-time GOD applications.
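A minimal, assumption-laden sketch (not the paper's encoder) of how an engine-supplied scene descriptor could steer coding decisions: a scene-change flag forces an intra frame, and engine-reported motion vectors replace motion estimation, so the search and rate-distortion optimization stages are skipped. The `SceneDescriptor` fields and `choose_block_mode` function are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class SceneDescriptor:
    scene_change: bool                      # true when the game engine cut to a new scene
    global_motion: tuple = (0, 0)           # camera pan reported by the engine
    object_motion: dict = field(default_factory=dict)  # block index -> motion vector

def choose_block_mode(block_index, descriptor):
    """Pick a coding decision for one macroblock from the descriptor alone."""
    if descriptor.scene_change:
        return ("INTRA", None)              # new scene: no useful temporal reference
    mv = descriptor.object_motion.get(block_index, descriptor.global_motion)
    return ("INTER", mv)                    # reuse the engine-provided motion vector

# Usage: a scene-change frame is coded intra; others reuse engine motion vectors.
desc = SceneDescriptor(scene_change=False, global_motion=(4, 0),
                       object_motion={17: (12, -3)})
print(choose_block_mode(17, desc))          # ('INTER', (12, -3))
print(choose_block_mode(3, desc))           # ('INTER', (4, 0))
```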

Aerial Scene Labeling Based on Convolutional Neural Networks (Convolutional Neural Networks기반 항공영상 영역분할 및 분류)

  • Na, Jong-Pil;Hwang, Seung-Jun;Park, Seung-Je;Baek, Joong-Hwan
    • Journal of Advanced Navigation Technology
    • Vol. 19, No. 6 / pp.484-491 / 2015
  • The volume of aerial scene imagery has greatly increased with the growth of digital optical imaging technology and the development of UAVs, and such imagery has been used for extracting ground properties, classification, change detection, image fusion, and mapping. In particular, deep learning algorithms have opened a new paradigm in image analysis that overcomes limitations of traditional pattern recognition. This paper demonstrates the possibility of applying deep learning (ConvNet) to a wide range of fields through the segmentation and classification of aerial scenes. We build a four-class image database of 3,000 images consisting of Road, Building, Yard, and Forest; because each class has a characteristic pattern, the resulting feature vector maps differ between classes. Our system consists of feature extraction, classification, and training: feature extraction is built from two ConvNet layers, and classification is performed with a multilayer perceptron and logistic regression.
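A minimal PyTorch sketch of the architecture outlined in the abstract: a two-layer ConvNet feature extractor followed by an MLP head over the four classes (Road, Building, Yard, Forest). The layer widths, kernel sizes, and 64x64 patch size are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class AerialSceneClassifier(nn.Module):
    def __init__(self, num_classes=4):
        super().__init__()
        self.features = nn.Sequential(            # two ConvNet feature-extraction layers
            nn.Conv2d(3, 16, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(           # multilayer-perceptron head
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 128), nn.ReLU(),
            nn.Linear(128, num_classes),           # logistic/softmax output via cross-entropy loss
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Usage on a dummy batch of 64x64 RGB patches.
model = AerialSceneClassifier()
logits = model(torch.randn(8, 3, 64, 64))
print(logits.shape)  # torch.Size([8, 4])
```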

Effective Detection Techniques for Gradual Scene Changes on MPEG Video (MPEG 영상에서의 점진적 장면전환에 대한 효과적인 검출 기법)

  • 윤석중;지은석;김영로;고성제
    • The Journal of Korean Institute of Communications and Information Sciences
    • Vol. 24, No. 8B / pp.1577-1585 / 1999
  • In this paper, we propose detection methods for gradual scene changes such as dissolve, pan, and zoom. The proposed method for detecting a dissolve region uses scene features based on spatial statistics of the image; the spatial statistics that define shot boundaries are derived from the squared means within each local area. We also propose a camera motion detection method that uses four representative motion vectors in the background, derived from the macroblock motion vectors extracted directly from MPEG streams. To reduce computation time, we use DC sequences rather than fully decoded MPEG video, and to detect gradual scene change regions precisely, we use all types of MPEG frames (I, P, and B frames). Simulation results show that the proposed detection methods perform better than existing methods.
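A rough sketch of the dissolve cue under assumptions, not the paper's exact statistic: each DC image is summarized by block-wise local squared means, and a dissolve is flagged where that summary drifts steadily over a window instead of jumping abruptly as it would at a hard cut. The block size, window, and thresholds are hypothetical.

```python
import numpy as np

def local_squared_means(dc_image, block=4):
    """Squared mean of each local block of a DC image (2-D grayscale array)."""
    h, w = dc_image.shape
    stats = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            stats.append(dc_image[y:y + block, x:x + block].mean() ** 2)
    return np.array(stats)

def detect_dissolves(dc_frames, window=8, drift_thresh=50.0, jump_thresh=400.0):
    """Return frame indices that start suspected dissolve regions."""
    feats = np.array([local_squared_means(f).mean() for f in dc_frames])
    deltas = np.diff(feats)
    dissolve_frames = []
    for i in range(len(deltas) - window):
        segment = deltas[i:i + window]
        total_change = abs(segment.sum())
        max_step = np.abs(segment).max()
        # Gradual change: large accumulated drift made of individually small steps.
        if total_change > drift_thresh and max_step < jump_thresh / window:
            dissolve_frames.append(i)
    return dissolve_frames
```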


The Slope Extraction and Compensation Based on Adaptive Edge Enhancement to Extract Scene Text Region (장면 텍스트 영역 추출을 위한 적응적 에지 강화 기반의 기울기 검출 및 보정)

  • Back, Jaegyung;Jang, Jaehyuk;Seo, Yeong Geon
    • Journal of Digital Contents Society
    • Vol. 18, No. 4 / pp.777-785 / 2017
  • In the modern world, a great deal of information can be obtained by extracting and recognizing the text in a scene, so techniques for extracting and recognizing text regions from scenes are constantly evolving. They can be largely divided into texture-based methods, connected-component methods, and mixtures of both. Texture-based methods find and extract text based on the fact that text and non-text regions differ in values such as color and brightness, while connected-component methods group similar adjacent pixels into connected elements and then decide text regions from their geometric properties. In this paper, we propose a method that adaptively enhances edges to improve the accuracy of text region extraction and that detects and corrects the slope of the image using edges and image segmentation. Because the slope of the image is corrected, the method extracts only the exact area containing the text, and its extraction rate is 15% more accurate than MSER and 10% more accurate than EEMSER.
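A minimal OpenCV sketch of the slope detection and correction step, offered as an assumption rather than the paper's pipeline: the dominant near-horizontal line angle is estimated from edges with a probabilistic Hough transform, and the image is rotated so candidate text lines are upright before extraction. The Canny and Hough parameters are illustrative.

```python
import cv2
import numpy as np

def estimate_slope(gray):
    """Dominant near-horizontal line angle (degrees) from Canny edges + Hough lines."""
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=gray.shape[1] // 4, maxLineGap=10)
    if lines is None:
        return 0.0
    angles = []
    for x1, y1, x2, y2 in lines[:, 0]:
        angle = np.degrees(np.arctan2(y2 - y1, x2 - x1))
        if abs(angle) < 45:                 # keep near-horizontal candidates only
            angles.append(angle)
    return float(np.median(angles)) if angles else 0.0

def correct_slope(image):
    """Rotate the image to compensate for the estimated slope."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    angle = estimate_slope(gray)
    h, w = gray.shape
    rot = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    return cv2.warpAffine(image, rot, (w, h), flags=cv2.INTER_LINEAR,
                          borderMode=cv2.BORDER_REPLICATE)
```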

Cut and Fade Detection of Scene Change Using Wavelet transform (웨이블렛 변환을 적용한 장면전환의 cut과 fade검출)

  • 이명은;박종현;박순영;방만원;조완현
    • Proceedings of the IEEK Conference
    • Proceedings of the 13th Signal Processing Joint Conference, IEEK, 2000 / pp.207-210 / 2000
  • This paper proposes an algorithm that detects cuts and fades among scene change types by applying the wavelet transform, which is useful for signal analysis. The proposed method computes a histogram of each frame from the wavelet low-frequency subband and classifies the transition as a cut (abrupt shot transition) if the histogram difference between the previous and current frames exceeds a threshold. Next, to detect gradual scene transitions such as fade-in and fade-out, where the cut point is unclear, moments are computed on the edge components extracted from the high-frequency subbands and the rate of change between adjacent frames is analyzed: a fade-in is detected when the value increases, and a fade-out when it decreases. Applied to real video segmentation for evaluation, the wavelet-based approach shows very high precision, and by adding moment information to the contour information it detects fade intervals more accurately than existing methods.
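A minimal sketch of the two cues described above using PyWavelets, with assumed thresholds rather than the paper's parameters: a histogram difference on the wavelet low-frequency subband signals a cut, and the energy of the high-frequency subbands, rising or falling steadily, signals a fade-in or fade-out.

```python
import numpy as np
import pywt

def wavelet_features(gray_frame):
    """Low-band histogram and high-band edge energy of one grayscale frame."""
    ll, (lh, hl, hh) = pywt.dwt2(gray_frame.astype(float), "haar")
    hist, _ = np.histogram(ll, bins=64, range=(0, 512), density=True)
    edge_energy = float(np.mean(lh ** 2 + hl ** 2 + hh ** 2))
    return hist, edge_energy

def classify_transition(prev_frame, curr_frame, cut_thresh=0.02, fade_thresh=5.0):
    """Return 'cut', 'fade-in', 'fade-out', or None for a pair of frames."""
    prev_hist, prev_energy = wavelet_features(prev_frame)
    curr_hist, curr_energy = wavelet_features(curr_frame)
    if np.abs(prev_hist - curr_hist).sum() > cut_thresh * len(prev_hist):
        return "cut"
    delta = curr_energy - prev_energy
    if delta > fade_thresh:
        return "fade-in"       # edge detail appearing as the scene fades in
    if delta < -fade_thresh:
        return "fade-out"      # edge detail vanishing as the scene fades out
    return None
```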


Edit Method Using Representative Frame on Video (비디오에서의 대표 프레임을 이용한 편집기법)

  • 유현수;이지현
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • 1999 Fall Conference of the Korea Institute of Maritime Information and Communication Sciences / pp.420-423 / 1999
  • In this paper, we propose a method for obtaining information efficiently through easy and rapid editing and retrieval of video data. To support this, candidate representative frames are extracted with an existing scene change detection method, the user selects representative frames for video segmentation as desired, and a visual indexing method supported by logical links then enables users to freely merge and split scenes.
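A minimal sketch of the kind of logically linked scene index the abstract describes, as an illustrative data structure rather than the paper's system: scenes built from user-selected representative frames support merge and split operations so an editor can freely regroup the detected shots.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Scene:
    start: int              # first frame index of the scene
    end: int                # last frame index of the scene
    rep_frame: int          # representative frame chosen by the user

class SceneIndex:
    def __init__(self, scenes: List[Scene]):
        self.scenes = sorted(scenes, key=lambda s: s.start)

    def merge(self, i: int):
        """Merge scene i with the following scene, keeping the first representative."""
        a, b = self.scenes[i], self.scenes.pop(i + 1)
        self.scenes[i] = Scene(a.start, b.end, a.rep_frame)

    def split(self, i: int, frame: int):
        """Split scene i at the given frame into two scenes."""
        s = self.scenes[i]
        self.scenes[i] = Scene(s.start, frame - 1, s.rep_frame)
        self.scenes.insert(i + 1, Scene(frame, s.end, frame))

# Usage: two detected scenes are merged, then split again at frame 120.
index = SceneIndex([Scene(0, 99, 40), Scene(100, 199, 150)])
index.merge(0)
index.split(0, 120)
print([(s.start, s.end) for s in index.scenes])  # [(0, 119), (120, 199)]
```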
