• Title/Summary/Keyword: distance between frames


DISTANCE BETWEEN CONTINUOUS FRAMES IN HILBERT SPACE

  • Amiri, Zahra;Kamyabi-Gol, Rajab Ali
    • Journal of the Korean Mathematical Society
    • /
    • v.54 no.1
    • /
    • pp.215-225
    • /
    • 2017
  • In this paper, we study some equivalence relations between continuous frames in a Hilbert space ${\mathcal{H}}$. In particular, we give two necessary and sufficient conditions under which two continuous frames are near. Moreover, we investigate a distance between continuous frames in order to obtain the nearest tight continuous frame to a given continuous frame. Finally, we apply these results to shearlet and wavelet frames in two examples. (A notational sketch of the underlying definition follows below.)
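
For orientation, the following is a minimal LaTeX sketch of the standard definition of a continuous frame; the nearness conditions studied in the paper are not reproduced, and the distance shown is only an illustrative choice, not necessarily the one the authors use.

```latex
% A map F : X \to \mathcal{H} over a measure space (X, \mu) is a continuous frame
% if there exist bounds 0 < A \le B < \infty such that, for every f \in \mathcal{H},
\[
  A\,\|f\|^{2} \;\le\; \int_{X} \bigl|\langle f, F(x)\rangle\bigr|^{2}\, d\mu(x) \;\le\; B\,\|f\|^{2}.
\]
% One illustrative (assumed) way to compare two continuous frames F and G is an
% L^2-type deviation of their analysis maps:
\[
  d(F, G) \;=\; \sup_{\|f\|=1}
    \Bigl( \int_{X} \bigl|\langle f, F(x)\rangle - \langle f, G(x)\rangle\bigr|^{2}\, d\mu(x) \Bigr)^{1/2}.
\]
```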

An Efficient Video Retrieval Algorithm Using Key Frame Matching for Video Content Management

  • Kim, Sang Hyun
    • International Journal of Contents
    • /
    • v.12 no.1
    • /
    • pp.1-5
    • /
    • 2016
  • To manipulate large video contents, effective video indexing and retrieval are required. A large number of video indexing and retrieval algorithms have been presented for frame-wise user queries or video content queries, whereas relatively few video sequence matching algorithms have been proposed for video sequence queries. In this paper, we propose an efficient algorithm that extracts key frames using color histograms and matches the video sequences using edge features. To effectively match video sequences with a low computational load, we make use of the key frames extracted by the cumulative measure and the distance between key frames, and compare two sets of key frames using the modified Hausdorff distance. Experimental results with real sequences show that the proposed video sequence matching algorithm using edge features yields higher accuracy and better performance than conventional methods such as histogram difference, Euclidean metric, Bhattacharyya distance, and directed divergence methods. (A sketch of the modified Hausdorff comparison appears below.)
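
As a rough illustration of the key-frame comparison step, here is a minimal sketch assuming each key frame has already been summarized by an edge-feature vector; the modified Hausdorff distance follows the common Dubuisson-Jain formulation, which may differ in detail from the paper's variant.

```python
import numpy as np

def modified_hausdorff(set_a, set_b):
    """Modified Hausdorff distance (Dubuisson-Jain style) between two sets of
    feature vectors given as arrays of shape (n, d) and (m, d)."""
    # Pairwise Euclidean distances between the two sets of key-frame features.
    dist = np.linalg.norm(set_a[:, None, :] - set_b[None, :, :], axis=2)
    # Average distance from each point to its nearest neighbour in the other set.
    d_ab = dist.min(axis=1).mean()   # A -> B
    d_ba = dist.min(axis=0).mean()   # B -> A
    return max(d_ab, d_ba)

# Usage: compare two videos through the edge features of their key frames
# (random arrays stand in for real, hypothetical feature vectors).
query_keyframes = np.random.rand(8, 64)
reference_keyframes = np.random.rand(10, 64)
print(modified_hausdorff(query_keyframes, reference_keyframes))
```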

An Efficient Video Retrieval Algorithm Using Color and Edge Features

  • Kim Sang-Hyun
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.7 no.1
    • /
    • pp.11-16
    • /
    • 2006
  • To manipulate large video databases, effective video indexing and retrieval are required. A large number of video indexing and retrieval algorithms have been presented for frame-wise user queries or video content queries, whereas relatively few video sequence matching algorithms have been proposed for video sequence queries. In this paper, we propose an efficient algorithm to extract key frames using color histograms and to match the video sequences using edge features. To effectively match video sequences with a low computational load, we make use of the key frames extracted by the cumulative measure and the distance between key frames, and compare two sets of key frames using the modified Hausdorff distance. Experimental results with several real sequences show that the proposed video retrieval algorithm using color and edge features yields higher accuracy and better performance than conventional methods such as histogram difference, Euclidean metric, Bhattacharyya distance, and directed divergence methods. (A sketch of the cumulative-measure key frame extraction appears below.)
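
As a hedged sketch of the key-frame extraction step, the code below declares a new key frame whenever the accumulated color-histogram difference since the last key frame exceeds a threshold; the specific cumulative measure and the threshold value are assumptions, not the paper's exact procedure.

```python
import numpy as np

def histogram_difference(frame_a, frame_b, bins=32):
    """L1 difference between normalized per-channel color histograms of two
    H x W x 3 uint8 frames."""
    total = 0.0
    for c in range(3):
        ha, _ = np.histogram(frame_a[..., c], bins=bins, range=(0, 256))
        hb, _ = np.histogram(frame_b[..., c], bins=bins, range=(0, 256))
        ha = ha / max(ha.sum(), 1)
        hb = hb / max(hb.sum(), 1)
        total += np.abs(ha - hb).sum()
    return total

def extract_key_frames(frames, threshold=1.5):
    """Pick a new key frame whenever the cumulative histogram difference since
    the previous key frame exceeds `threshold` (assumed criterion)."""
    key_indices = [0]
    cumulative = 0.0
    for i in range(1, len(frames)):
        cumulative += histogram_difference(frames[i - 1], frames[i])
        if cumulative >= threshold:
            key_indices.append(i)
            cumulative = 0.0
    return key_indices
```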


Interactive Realtime Facial Animation with Motion Data (모션 데이터를 사용한 대화식 실시간 얼굴 애니메이션)

  • 김성호
    • Journal of the Korea Computer Industry Society
    • /
    • v.4 no.4
    • /
    • pp.569-578
    • /
    • 2003
  • This paper presents a method in which the user produces a real-time facial animation by navigating a space of facial expressions built from a large number of captured expressions. The core of the method is how to define the distance between facial expressions, how to distribute the expressions into a suitable intuitive space using that distance, and a user interface for generating real-time facial expression animation in this space. We created the search space from about 2,400 captured facial expression frames, and when the user travels freely through the space, the facial expressions located on the path are displayed in sequence. To visually distribute the roughly 2,400 captured facial expressions in the space, we need to calculate the distance between frames; we use Floyd's algorithm to obtain the all-pairs shortest paths between frames and derive a manifold distance from them. The frames are then laid out in a 2D intuitive space by applying multidimensional scaling to the manifold distances, so that the original distances between facial expression frames are preserved as closely as possible. The method therefore has the significant advantage that the user can navigate freely without being restricted within the intuitive space, because there are always facial expression frames along the navigated path. It is also efficient in that the user can review and regenerate the desired animation in real time through an easy-to-use interface. (A sketch of the shortest-path and embedding steps appears below.)
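
A minimal sketch of the two numerical steps named above, assuming a precomputed neighbourhood graph over the expression frames; the graph construction, edge weights, and 2D embedding details are assumptions rather than the paper's exact procedure.

```python
import numpy as np

def floyd_warshall(weights):
    """All-pairs shortest-path distances; `weights` is an (n, n) matrix with
    np.inf where two expression frames are not directly connected."""
    dist = weights.copy()
    for k in range(dist.shape[0]):
        # Relax every pair (i, j) through intermediate frame k.
        dist = np.minimum(dist, dist[:, [k]] + dist[[k], :])
    return dist

def classical_mds(dist, dim=2):
    """Classical multidimensional scaling of a distance matrix into `dim` dimensions."""
    n = dist.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n            # centering matrix
    b = -0.5 * j @ (dist ** 2) @ j                 # double-centred squared distances
    eigvals, eigvecs = np.linalg.eigh(b)
    order = np.argsort(eigvals)[::-1][:dim]        # keep the largest eigenvalues
    return eigvecs[:, order] * np.sqrt(np.maximum(eigvals[order], 0.0))

# graph_weights = build_expression_graph(frames)   # hypothetical helper
# coords_2d = classical_mds(floyd_warshall(graph_weights))
```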


Performance Improvement of Speech/Music Discrimination Based on Cepstral Distance (켑스트럼 거리 기반의 음성/음악 판별 성능 향상)

  • Park Seul-Han;Choi Mu Yeol;Kim Hyung Soon
    • MALSORI
    • /
    • no.56
    • /
    • pp.195-206
    • /
    • 2005
  • Discrimination between speech and music is important in many multimedia applications. In this paper, focusing on the spectral change characteristics of speech and music, we propose a new method of speech/music discrimination based on cepstral distance. Instead of using the cepstral distance between frames at a fixed interval, the minimum of the cepstral distances among neighboring frames is employed to increase the discriminability between fast-changing music and speech. In addition, to prevent speech segments that include short pauses from being misclassified as music, short pause segments are excluded from the cepstral distance computation. The experimental results show that the proposed method yields an error rate reduction of $68\%$ in comparison with the conventional approach using cepstral distance. (A sketch of the neighbor-frame minimum distance appears below.)
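
The following is a minimal sketch of the neighbor-frame minimum idea, assuming per-frame cepstral coefficient vectors (for example, MFCCs) are already available; the neighborhood size and the Euclidean form of the cepstral distance are assumptions.

```python
import numpy as np

def min_neighbor_cepstral_distance(cepstra, frame_index, neighborhood=3):
    """Minimum cepstral distance between one frame and its neighbouring frames,
    instead of a single fixed-interval comparison.

    `cepstra` is an (n_frames, n_coeffs) array of cepstral coefficient vectors.
    """
    n = cepstra.shape[0]
    current = cepstra[frame_index]
    distances = []
    for offset in range(1, neighborhood + 1):
        for j in (frame_index - offset, frame_index + offset):
            if 0 <= j < n:
                distances.append(np.linalg.norm(current - cepstra[j]))
    return min(distances)

# The per-frame minimum serves as the feature fed to the speech/music decision;
# frames flagged as short pauses would be skipped before this computation.
```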


A Distance Estimation Method of Object's Motion by Tracking Field Features and a Quantitative Evaluation of the Estimation Accuracy (배경의 특징 추적을 이용한 물체의 이동 거리 추정 및 정확도 평가)

  • 이종현;남시욱;이재철;김재희
    • Proceedings of the IEEK Conference
    • /
    • 1999.11a
    • /
    • pp.621-624
    • /
    • 1999
  • This paper describes a method for estimating the moving distance of an object in a soccer image sequence by tracking field features, and we quantitatively evaluate the estimation accuracy. We assume that the input image sequence is taken with a camera on a static axis and includes only zooming and panning transformations between frames. Adaptive template matching is adopted for non-rigid object tracking. For background compensation, feature templates selected from the reference frame are matched in the following frames, and the matched feature point pairs are used to compute affine motion parameters. A perspective displacement field model is used to estimate the real distance between two positions in the input image. To quantitatively evaluate the accuracy of the estimation, we synthesized a 3-dimensional virtual stadium with graphic tools and experimented on the synthesized 2-dimensional image sequences. The experiments show that the average error between the actual moving distance and the estimated distance is 1.84%. (A sketch of the affine parameter estimation from matched feature pairs appears below.)
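
A minimal sketch of estimating affine motion parameters from matched background feature point pairs by least squares; the point format and the use of a plain least-squares solve (rather than any robust variant the authors may have used) are assumptions.

```python
import numpy as np

def estimate_affine(src_pts, dst_pts):
    """Least-squares 2x3 affine transform mapping src_pts to dst_pts.

    Both inputs are (n, 2) arrays of matched field-feature coordinates;
    the result is [[a, b, tx], [c, d, ty]].
    """
    n = src_pts.shape[0]
    # Linear system A @ params = b for the six affine parameters.
    a = np.zeros((2 * n, 6))
    b = np.zeros(2 * n)
    a[0::2, 0:2] = src_pts   # x' = a*x + b*y + tx
    a[0::2, 2] = 1.0
    a[1::2, 3:5] = src_pts   # y' = c*x + d*y + ty
    a[1::2, 5] = 1.0
    b[0::2] = dst_pts[:, 0]
    b[1::2] = dst_pts[:, 1]
    params, *_ = np.linalg.lstsq(a, b, rcond=None)
    return params.reshape(2, 3)

# Undoing this background motion before differencing the object's positions
# separates camera panning/zooming from the object's own displacement.
```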


A Study on the Moving Distance and Velocity Measurement of 2-D Moving Object Using a Microcomputer (마이크로 컴퓨터를 이용한 2차원 이동물체의 이동거리와 속도측정에 관한 연구)

  • Lee, Joo Shin;Choi, Kap Seok
    • Journal of the Korean Institute of Telematics and Electronics
    • /
    • v.23 no.2
    • /
    • pp.206-216
    • /
    • 1986
  • In this paper, the moving distance and velocity of a single moving object are measured by sampling three frames from a two-dimensional line sequence image. The brightness of each frame is analyzed, and the bit data of their pixels are rearranged so that the difference image can be extracted. The parameters for recognition of the object are the gray level of the object, the number of vertex points, and the distance between the vertex points. The moving distance is obtained from the coordinates constructed by bit processing of the data in the memory map of a microcomputer, and the moving velocity is obtained from the moving distance and the time interval between the first and second sampled frames. (A rough sketch of the differencing and velocity computation appears below.)
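
As a rough illustration of the measurement idea, the sketch below builds binary difference images between the sampled frames, takes the displacement of the object region's centroid as the moving distance, and divides by the sampling interval; the thresholding and the centroid-based distance are assumptions, not the paper's bit-level procedure.

```python
import numpy as np

def difference_mask(frame_a, frame_b, threshold=25):
    """Binary difference image between two grayscale uint8 frames."""
    return np.abs(frame_a.astype(np.int16) - frame_b.astype(np.int16)) > threshold

def centroid(mask):
    """Centroid (row, col) of the foreground pixels in a binary mask."""
    ys, xs = np.nonzero(mask)
    return np.array([ys.mean(), xs.mean()])

def distance_and_velocity(frame1, frame2, frame3, dt, pixel_size=1.0):
    """Moving distance between the first and second sampled frames and the
    resulting velocity; `dt` is the sampling interval and `pixel_size`
    converts pixels to real-world units."""
    mask12 = difference_mask(frame1, frame2)
    mask23 = difference_mask(frame2, frame3)
    # Assumed roles: pixels changed only in frame1->frame2 approximate the
    # object's old position; pixels changed in both approximate its position
    # at the second frame.
    pos_before = centroid(mask12 & ~mask23)
    pos_after = centroid(mask12 & mask23)
    distance = np.linalg.norm(pos_after - pos_before) * pixel_size
    return distance, distance / dt
```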


Content similarity matching for video sequence identification

  • Kim, Sang-Hyun
    • International Journal of Contents
    • /
    • v.6 no.3
    • /
    • pp.5-9
    • /
    • 2010
  • To manage large database systems with video, effective video indexing and retrieval are required. A large number of video retrieval algorithms have been presented for frame-wise user queries or video content queries, whereas relatively few video identification algorithms have been proposed for video sequence queries. In this paper, we propose an effective video identification algorithm for video sequence queries that employs the Cauchy function of histograms between successive frames and the modified Hausdorff distance. To effectively match the video sequences with a low computational load, we make use of the key frames extracted by the cumulative Cauchy function and compare the sets of key frames using the modified Hausdorff distance. Experimental results with several color video sequences show that the proposed algorithm for video identification yields remarkably better performance than conventional algorithms such as the Euclidean metric and directed divergence methods. (A sketch of a Cauchy-type histogram similarity appears below.)
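
The paper's exact Cauchy function is not reproduced here; as a hedged sketch, the code below applies a Cauchy-type (Lorentzian) kernel to bin-wise differences of the color histograms of successive frames, which is one plausible reading of the measure. The resulting key frames would then be compared with the modified Hausdorff distance, as in the entries above.

```python
import numpy as np

def color_histogram(frame, bins=16):
    """Normalized, concatenated per-channel color histogram of an H x W x 3 uint8 frame."""
    parts = []
    for c in range(3):
        h, _ = np.histogram(frame[..., c], bins=bins, range=(0, 256))
        parts.append(h)
    h = np.concatenate(parts).astype(float)
    return h / max(h.sum(), 1.0)

def cauchy_similarity(hist_a, hist_b, sigma=0.05):
    """Cauchy-type (Lorentzian) similarity between two normalized histograms.

    Each bin contributes 1 / (1 + (difference / sigma)^2), so the score is
    high when successive frames have similar histograms (assumed form).
    """
    diff = hist_a - hist_b
    return float(np.mean(1.0 / (1.0 + (diff / sigma) ** 2)))

# Accumulating (1 - similarity) over successive frames gives one possible
# cumulative measure for selecting key frames.
```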

A Study on Object Extraction and Measurement of Moving Direction and Speed by Moving Image Processing (동영상 처리에 의한 목적물 추출 및 이동 방향과 이동 속도 계측에 관한 연구)

  • 이종형;황병원
    • Proceedings of the Korean Institute of Communication Sciences Conference
    • /
    • 1987.04a
    • /
    • pp.56-59
    • /
    • 1987
  • In this study, moving-information extraction techniques for moving objects are applied to digital image data obtained by sampling three frames of a fixed-background, two-dimensional line sequence image. The brightness values of successive frames are compared to extract difference images, which are then binarized and neighbor-averaged. From the neighbor-averaged image, the parameters for recognition of the object are the number of contour pixels, the number of vertex points, and the distance between the vertex points. After matching the same object across frames, the moving distance is obtained from the coordinates constructed by bit processing of the digital data, and the moving velocity is obtained from the moving distance and the time interval between the first and second sampled frames. (A sketch of the contour and vertex feature extraction appears below.)
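
A minimal sketch of extracting the recognition parameters named above (contour pixel count, vertex points, and vertex-to-vertex distances) from a binarized difference image, using OpenCV for contour tracing and polygon approximation; the library choice and the approximation tolerance are assumptions.

```python
import cv2
import numpy as np

def object_shape_features(diff_image, approx_tol=0.02):
    """Contour pixel count, vertex points, and pairwise vertex distances for
    the largest object in a binary difference image (uint8, foreground > 0)."""
    contours, _ = cv2.findContours(diff_image, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    if not contours:
        return None
    # Treat the largest connected region as the moving object.
    contour = max(contours, key=cv2.contourArea)
    # Polygonal approximation yields the vertex points of the object outline.
    epsilon = approx_tol * cv2.arcLength(contour, True)
    vertices = cv2.approxPolyDP(contour, epsilon, True).reshape(-1, 2)
    # Pairwise distances between vertex points, used as recognition parameters.
    diffs = vertices[:, None, :].astype(float) - vertices[None, :, :].astype(float)
    vertex_distances = np.linalg.norm(diffs, axis=2)
    return {
        "contour_pixels": len(contour),
        "vertices": vertices,
        "vertex_distances": vertex_distances,
    }
```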


Shot Boundary Detection Using Global Decision Tree (전역적 결정트리를 이용한 샷 경계 검출)

  • Shin, Seong-Yoon;Moon, Hyung-Yoon;Rhee, Yang-Won
    • Journal of the Korea Society of Computer and Information
    • /
    • v.13 no.1
    • /
    • pp.75-80
    • /
    • 2008
  • This paper proposes a method to detect scene changes using a global decision tree that extracts boundary cuts, i.e., the large abrupt changes caused by camera breaks, from the difference values of frames. First, the frame difference value is calculated through a regional $\chi^2$-histogram and normalization; next, the distances between the normalized difference values are calculated. Shot boundary detection is performed by comparing a global threshold distance, computed from the distances between the calculated difference values, with the distance value for two adjacent frames. The global decision tree proposed in this paper can easily detect sudden scene changes such as those arising from object or camera motion and flashlights. (A sketch of the regional $\chi^2$-histogram difference appears below.)
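
A minimal sketch of a regional $\chi^2$-histogram difference between two frames, assuming the frame is split into a fixed grid of regions whose per-region differences are summed; the grid size, bin count, and normalization are assumptions.

```python
import numpy as np

def chi_square_diff(hist_a, hist_b, eps=1e-9):
    """Chi-square difference between two histograms."""
    return np.sum((hist_a - hist_b) ** 2 / (hist_a + hist_b + eps))

def regional_chi_square(frame_a, frame_b, grid=(4, 4), bins=16):
    """Sum of per-region chi-square histogram differences between two
    grayscale frames split into a grid of regions."""
    h, w = frame_a.shape
    rows, cols = grid
    total = 0.0
    for r in range(rows):
        for c in range(cols):
            ys = slice(r * h // rows, (r + 1) * h // rows)
            xs = slice(c * w // cols, (c + 1) * w // cols)
            ha, _ = np.histogram(frame_a[ys, xs], bins=bins, range=(0, 256))
            hb, _ = np.histogram(frame_b[ys, xs], bins=bins, range=(0, 256))
            total += chi_square_diff(ha.astype(float), hb.astype(float))
    return total

# A cut is declared when the value for two adjacent frames exceeds a global
# threshold derived from the distribution of all difference values.
```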
