• Title/Summary/Keyword: Scene change detection

Shortcut Shot Detection Based on Compressed Video Bitstream

  • Ryu, Kwang-Ryol; Kim, Young-Bin
    • Journal of Information and Communication Convergence Engineering, v.5 no.3, pp.269-272, 2007
  • This paper presents shortcut shot detection based on the MPEG compressed video bitstream. The detection algorithm works directly on picture frames of the MPEG compressed video without decompressing the original images. For shortcut detection, the I and P frames of the MPEG video bitstream are classified: scene cuts at I pictures are detected from the decoded DC image, while scene cuts at P pictures are detected by monitoring the percentage of intra-macroblocks per P picture. Experimental results on a QVGA test video bitstream show an average detection rate of 92%, with a search time around 4.5 times faster than a scene change detection algorithm that fully decompresses the bitstream.
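
The cut rules summarized above (a large DC-image change at I pictures, a high intra-macroblock ratio at P pictures) can be illustrated with a minimal Python sketch. This is not the paper's code: it assumes the DC images and intra-macroblock counts have already been parsed from the MPEG bitstream, and the function names and thresholds are placeholders.

```python
# Hedged sketch, not the authors' implementation. Inputs are assumed to be
# pre-parsed from the compressed bitstream by a separate MPEG parser.
import numpy as np

def cuts_at_i_frames(dc_images, diff_threshold=30.0):
    """dc_images: list of 2-D arrays, one decoded DC image per I frame."""
    cuts = []
    for k in range(1, len(dc_images)):
        prev = dc_images[k - 1].astype(np.float32)
        curr = dc_images[k].astype(np.float32)
        if np.mean(np.abs(curr - prev)) > diff_threshold:  # large DC change => cut
            cuts.append(k)
    return cuts

def cuts_at_p_frames(intra_mb_counts, total_mb, ratio_threshold=0.6):
    """intra_mb_counts[k]: number of intra-coded macroblocks in the k-th P frame."""
    return [k for k, n in enumerate(intra_mb_counts)
            if n / float(total_mb) > ratio_threshold]      # many intra MBs => cut
```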

Changing Scene Detection using Histogram and Header Information of H.264 Video Stream (H.264 비디오 스트림의 히스토그램 및 헤더 정보를 이용한 장면 전환 검출에 관한 연구)

  • Kim, Young-Bin; Sclabassi, Robert J.; Ryu, Kwang-Ryol
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference, 2006.05a, pp.197-200, 2006
  • This paper presents scene change detection using the histogram and header information of an H.264 video stream. Histogram comparison is the conventional way to detect scene changes, but it requires considerable processing time because the video must be decompressed and the histogram difference between consecutive frames computed for every frame. The method using H.264 header information can detect scene changes in real time without this computation. Combining histogram and header information yields faster scene change detection while maintaining the same precision and recall.
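
For reference, a minimal sketch of the histogram-comparison baseline mentioned above is given below. It is not the paper's implementation; the frame source (OpenCV's VideoCapture, which decodes every frame), the bin count, and the threshold are assumptions.

```python
# Hedged sketch of full-decode histogram comparison between consecutive frames.
import cv2
import numpy as np

def histogram_cuts(video_path, bins=64, threshold=0.4):
    """Return frame indices where the gray-level histogram changes sharply."""
    cap = cv2.VideoCapture(video_path)
    prev_hist, cuts, idx = None, [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        hist = cv2.calcHist([gray], [0], None, [bins], [0, 256]).ravel()
        hist /= hist.sum() + 1e-9                 # normalize to a distribution
        if prev_hist is not None and np.abs(hist - prev_hist).sum() > threshold:
            cuts.append(idx)                      # L1 histogram distance exceeds threshold
        prev_hist, idx = hist, idx + 1
    cap.release()
    return cuts
```

The header-information variant described in the abstract avoids this per-frame decoding and histogram work entirely, which is where the speed-up comes from.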

Video Shot Detection Based on Video Frame Types (비디오 프레임 타입을 이용한 비디오 셧 검출)

  • Kim, Young-Bin; Ryu, Kwang-Ryol; Sclabassi, Robert J.
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference, 2007.06a, pp.145-148, 2007
  • This paper presents video shot detection based on video picture type. The detection algorithm works on MPEG compressed video frames directly, without reconstructing the original images. For shot detection, the I and P frames of the MPEG video bitstream are classified: scene cuts at I pictures are detected from the reconstructed DC image, while scene cuts at P pictures are detected by monitoring the percentage of intra-macroblocks per P picture. Experimental results on a test video bitstream show a detection rate of 85~98% and a search time four times faster than a previously known video shot detection algorithm that operates on decompressed video.
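
As a reading aid, the sketch below shows why a small "DC image" can stand in for the full frame when comparing I pictures: the DC coefficient of an 8x8 DCT block is proportional to the block mean. This is my own illustration, not the paper's decoder; it computes the DCT on pixel blocks rather than reading the coefficients out of the bitstream.

```python
# Hedged illustration: build a thumbnail from per-block DCT DC terms.
import cv2
import numpy as np

def dc_image(gray):
    """One value per 8x8 block, taken from the block's DCT DC coefficient."""
    h, w = gray.shape
    h8, w8 = h // 8, w // 8
    out = np.empty((h8, w8), dtype=np.float32)
    for by in range(h8):
        for bx in range(w8):
            block = gray[by*8:(by+1)*8, bx*8:(bx+1)*8].astype(np.float32)
            out[by, bx] = cv2.dct(block)[0, 0] / 8.0  # orthonormal DCT: DC = 8 * block mean
    return out
```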

A Scene Boundary Detection Scheme using Audio Information in MPEG System Stream (MPEG 시스템 스트림상에서 오디오 정보를 이용한 장면 경계 검출 방법)

  • Kim, Jae-Hong; Nang, Jong-Ho; Park, Soo-Yong
    • Journal of KIISE: Software and Applications, v.27 no.8, pp.864-876, 2000
  • This paper proposes a new scene boundary detection scheme for the MPEG System stream that uses MPEG Audio information, and demonstrates its usefulness through extensive experiments. A scene boundary is characterized by rapid changes in the audio as well as the video information. The paper first classifies scene boundaries into three cases with respect to the audio changes: Radical, Gradual, and Micro changes. A Radical change shows a large-scale change in decibel and pitch values at the scene boundary, a Gradual change shows a long transition of decibel and pitch values from maximum to minimum or vice versa, and a Micro change shows some change in pitch or frequency distribution without a decibel change. Based on this analysis, a new scene change detection algorithm for these three cases is proposed, in which a progressive window along the time line is used to trace changes in the audio information. Experiments with various movies show that the proposed algorithm produces a high detection ratio for Radical changes, the most common scene change in movies, and a moderate detection ratio for Gradual and Micro changes. The proposed scheme could be used to build a database for visual information such as the MPEG System stream.
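
A minimal sketch of the Radical-change case only, assuming the MPEG Audio has already been decoded to raw mono samples: a progressive window measures the short-term level in decibels and flags large jumps. The window length and threshold are assumptions, and the Gradual and Micro cases are not handled here.

```python
# Hedged sketch: flag decibel jumps between consecutive analysis windows.
import numpy as np

def radical_audio_boundaries(samples, rate, win_sec=0.5, db_jump=15.0):
    """samples: 1-D float array of decoded mono audio; rate: samples per second."""
    win = int(win_sec * rate)
    levels = []
    for start in range(0, len(samples) - win, win):
        rms = np.sqrt(np.mean(samples[start:start + win] ** 2) + 1e-12)
        levels.append(20.0 * np.log10(rms))       # window level in dB
    return [i * win / rate for i in range(1, len(levels))
            if abs(levels[i] - levels[i - 1]) > db_jump]   # candidate boundary times (s)
```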

Comparisons of Object Recognition Performance with 3D Photon Counting & Gray Scale Images

  • Lee, Chung-Ghiu; Moon, In-Kyu
    • Journal of the Optical Society of Korea, v.14 no.4, pp.388-394, 2010
  • In this paper, the object recognition performance of a photon counting integral imaging system is quantitatively compared with that of a conventional gray scale imaging system. For 3D imaging of objects with a small number of photons, the elemental image set of a 3D scene is obtained using an integral imaging setup, and the elemental image detection is assumed to follow a Poisson distribution. A computational geometrical ray back-propagation algorithm and a parametric maximum likelihood estimator are applied to the photon counting elemental image set to reconstruct the original 3D scene. To evaluate photon counting object recognition performance, the normalized correlation peaks between the reconstructed 3D scenes are calculated for both varied and fixed total numbers of photons in the reconstructed sectional image while the total number of image channels in the integral imaging system is changed. It is quantitatively shown that the recognition performance of the photon counting integral imaging (PCII) system can approach that of a conventional gray scale imaging system as the number of image viewing channels is increased up to a threshold point. We also present experiments to find the threshold number of image channels in the PCII system that guarantees recognition performance comparable to a gray scale imaging system. To the best of our knowledge, this is the first report comparing object recognition performance with 3D photon counting and gray scale images.
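
To make the photon-counting model concrete, the toy sketch below simulates Poisson photon detection on a normalized irradiance image and computes a zero-shift normalized correlation with the reference. It is an illustration only, not the authors' integral-imaging reconstruction; the test image and photon budget are arbitrary.

```python
# Hedged toy model: Poisson photon counting followed by normalized correlation.
import numpy as np

def photon_limited(image, n_photons, seed=0):
    """Draw a photon-count image whose expected total count is n_photons."""
    rng = np.random.default_rng(seed)
    irradiance = image / image.sum()              # normalized irradiance
    return rng.poisson(n_photons * irradiance).astype(np.float32)

def normalized_correlation(a, b):
    """Normalized cross-correlation value at zero shift."""
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float(np.mean(a * b))

reference = np.random.default_rng(1).random((64, 64)).astype(np.float32)
counts = photon_limited(reference, n_photons=5000)
print(normalized_correlation(reference, counts))  # approaches 1 as the photon budget grows
```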

Feature-based Image Analysis for Object Recognition on Satellite Photograph (인공위성 영상의 객체인식을 위한 영상 특징 분석)

  • Lee, Seok-Jun; Jung, Soon-Ki
    • Journal of the HCI Society of Korea, v.2 no.2, pp.35-43, 2007
  • This paper presents a system for image matching and recognition on artificial satellite photographs based on image feature detection and description techniques. We define a set of parameters describing the environmental variations introduced by the image handling process, and the core of the experiment is to analyze how changing each parameter affects the match rate and recognition accuracy. The proposed system is basically inspired by Lowe's SIFT (Scale-Invariant Feature Transform) algorithm. Descriptors extracted from local affine invariant regions are stored in a database whose clusters are defined by k-means applied to the 128-dimensional descriptor vectors of satellite photographs from Google Earth. A label is then attached to each cluster of the feature database and serves as guidance to the information of a building appearing in the camera scene. The experiment varies these parameters and compares the resulting effects on image matching and recognition. Finally, the implementation and experimental results for several queries are presented.
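
A minimal sketch of the descriptor-clustering step described above, assuming OpenCV is used for SIFT (cv2.SIFT_create requires OpenCV 4.4 or later) and that the image paths and cluster count k are placeholders; attaching building labels to the clusters is omitted.

```python
# Hedged sketch: extract 128-D SIFT descriptors and cluster them with k-means.
import cv2
import numpy as np

def build_descriptor_clusters(image_paths, k=50):
    sift = cv2.SIFT_create()
    descriptors = []
    for path in image_paths:
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        _, desc = sift.detectAndCompute(gray, None)   # 128-D descriptors per image
        if desc is not None:
            descriptors.append(desc)
    data = np.vstack(descriptors).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, centers = cv2.kmeans(data, k, None, criteria, 5, cv2.KMEANS_PP_CENTERS)
    return labels.ravel(), centers                    # cluster id per descriptor, centroids
```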

Lane Detection-based Camera Pose Estimation (차선검출 기반 카메라 포즈 추정)

  • Jung, Ho Gi; Suhr, Jae Kyu
    • Transactions of the Korean Society of Automotive Engineers, v.23 no.5, pp.463-470, 2015
  • When a camera installed on a vehicle is used, estimating the camera pose, including the tilt, roll, and pan angle with respect to the world coordinate system, is important for associating camera coordinates with world coordinates. Previous approaches using huge calibration patterns have the disadvantage that the patterns are costly to make and install, and previous approaches exploiting multiple vanishing points detected in a single image are not suitable for automotive applications because scenes in which a front camera can capture multiple vanishing points are hard to find in everyday driving environments. This paper proposes a camera pose estimation method that collects multiple images of lane markings while the horizontal angle with respect to the markings changes. One vanishing point, the intersection of the left and right lane markings, is detected in each image, and the vanishing line is estimated from the detected vanishing points. Finally, the camera pose is estimated from the vanishing line. The proposed method is based on the fact that planar motion does not change the vanishing line of the plane, and that the normal vector of the plane can be estimated from the vanishing line. Experiments with both large and small tilt and roll angles show that the proposed method produces accurate estimates, which is verified by checking that the lane markings are upright in the bird's eye view image once the pan angle is compensated.
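
The geometric core of the method, one vanishing point per image from the two lane lines and then a line fitted through the collected vanishing points, can be sketched as below. This is my formulation in homogeneous coordinates, not the paper's code; the lane-marking endpoints are assumed to come from a separate lane detector.

```python
# Hedged sketch: vanishing point from two lane lines, vanishing line from many images.
import numpy as np

def line_through(p, q):
    """Homogeneous line through two image points (x, y)."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def vanishing_point(left_pts, right_pts):
    """Intersection of the left and right lane-marking lines (each given by two points)."""
    v = np.cross(line_through(*left_pts), line_through(*right_pts))
    return v[:2] / v[2]                           # back to inhomogeneous (x, y)

def fit_vanishing_line(vanishing_points):
    """Least-squares line y = a*x + b through vanishing points collected over time."""
    pts = np.asarray(vanishing_points, dtype=np.float64)
    a, b = np.polyfit(pts[:, 0], pts[:, 1], 1)
    return a, b
```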

Improved Text Recognition using Analysis of Illumination Component in Color Images (컬러 영상의 조명성분 분석을 통한 문자인식 성능 향상)

  • Choi, Mi-Young; Kim, Gye-Young; Choi, Hyung-Il
    • Journal of the Korea Society of Computer and Information, v.12 no.3, pp.131-136, 2007
  • This paper proposes a new approach for eliminating the reflectance component to improve text detection in color images. Color images produced by color printing normally contain an illumination component as well as a reflectance component, and the reflectance component is well known to hinder detecting and recognizing objects such as text in a scene, since it blurs the overall image. We have developed an approach that efficiently removes the reflectance component while preserving the illumination component. To determine the lighting environment, the input image is classified as Normal or Polarized using a histogram of its red component. By removing the reflectance component caused by illumination changes, the blurring of text by light is reduced and the text can be extracted. Experimental results show superior performance even when an image has a complex background. Text detection and recognition performance is influenced by illumination conditions, and our method is robust to images with different illumination conditions.
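
Because the abstract does not spell out the exact decomposition, the sketch below uses a generic retinex-style split (illumination estimated by heavy blurring, reflectance as the residual) plus a crude red-channel histogram heuristic for the Normal/Polarized decision. Both are assumptions for illustration, not the authors' algorithm.

```python
# Hedged, generic sketch only; not the paper's method.
import cv2
import numpy as np

def split_illumination_reflectance(bgr, blur_ksize=51):
    """Retinex-style split under a multiplicative image model I = L * R."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY).astype(np.float32) + 1.0
    illumination = cv2.GaussianBlur(gray, (blur_ksize, blur_ksize), 0)
    reflectance = gray / (illumination + 1e-6)
    return illumination, reflectance

def looks_polarized(bgr, skew_threshold=0.5):
    """Crude placeholder for the Normal/Polarized decision from the red channel."""
    red = bgr[:, :, 2].ravel().astype(np.float32)
    skew = np.mean(((red - red.mean()) / (red.std() + 1e-6)) ** 3)
    return skew > skew_threshold                  # strongly skewed red histogram => "Polarized"
```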
