• Title/Summary/Keyword: distance between frames

Search results: 94

Detection of Lane Curve Direction by Using Image Processing Based on Neural Network (차선의 회전 방향 인식을 위한 신경회로망 응용 화상처리)

  • 박종웅;장경영;이준웅
    • Transactions of the Korean Society of Automotive Engineers / v.7 no.5 / pp.178-185 / 1999
  • Recently, collision warning systems have been developed to improve vehicle safety. These systems chiefly use radar, but it must be decided whether a vehicle detected by radar is in the same lane as the host vehicle or not. Therefore, a vision system is needed to detect the traffic lane. As a preparatory step, this study presents the development of an algorithm to recognize the direction of the traffic lane curve. That is, a neural network that can recognize the lane curve direction is constructed using information on the short-distance lane, the middle-distance lane, and the lane slope. For this procedure, the relation between the information used and the lane curve direction must be analyzed. When applied to 2,000 sampled frames, the success rate is over 90%.

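A minimal sketch of the idea in the abstract above, assuming the three inputs are a near-range lateral offset, a mid-range lateral offset, and the lane slope; the network size and the random weights are illustrative stand-ins, not the trained network from the paper.

```python
# Hedged sketch: a tiny MLP mapping three assumed lane features to a curve-direction class.
import numpy as np

def predict_curve_direction(features, W1, b1, W2, b2):
    """features: shape (3,) = [near_offset, mid_offset, lane_slope] (assumed inputs)."""
    h = np.tanh(W1 @ features + b1)      # hidden layer
    logits = W2 @ h + b2                 # one logit per class
    classes = ["left", "straight", "right"]
    return classes[int(np.argmax(logits))]

# Randomly initialized weights stand in for a trained model.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 3)), np.zeros(8)
W2, b2 = rng.normal(size=(3, 8)), np.zeros(3)
print(predict_curve_direction(np.array([0.1, 0.4, 0.02]), W1, b1, W2, b2))
```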

Detection of Gradual Transitions in MPEG Compressed Video using Hidden Markov Model (은닉 마르코프 모델을 이용한 MPEG 압축 비디오에서의 점진적 변환의 검출)

  • Choi, Sung-Min;Kim, Dai-Jin;Bang, Sung-Yang
    • Journal of KIISE: Software and Applications / v.31 no.3 / pp.379-386 / 2004
  • Video segmentation is a fundamental task in video indexing, and it includes two kinds of shot change detection: abrupt transitions and gradual transitions. Abrupt shot boundaries are detected by computing an image-based distance between adjacent frames and comparing this distance with a pre-determined threshold. However, gradual shot boundaries are difficult to detect with this approach. To overcome this difficulty, we propose a method that detects gradual transitions in MPEG compressed video using the HMM (Hidden Markov Model). We consider two different HMMs: a discrete HMM and a continuous HMM with a Gaussian mixture model. As image features for the HMM observations, we use two distinct features: the difference of the histograms of DC images between two adjacent frames, and the difference of each macroblock's deviation at corresponding macroblocks between two adjacent frames, where deviation means the arithmetic difference of a macroblock's DC value from the mean of the DC values in the given frame. Furthermore, we obtain the DC sequences of P and B frames by a first-order approximation for fast and effective computation. Experimental results show that the best detection and classification performance for gradual transitions is obtained when a continuous HMM with one Gaussian model is used and the two image features are used together.
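
A rough sketch of the two observation features named in the abstract, computed from per-macroblock DC images of adjacent frames; the bin count, value range, and norms used below are assumptions, not the paper's exact settings.

```python
# Hedged sketch of the two per-frame HMM observation features.
import numpy as np

def hmm_observation_features(dc_prev, dc_curr, bins=32, value_range=(0, 255)):
    """dc_prev, dc_curr: 2D arrays of per-macroblock DC values for adjacent frames."""
    h_prev, _ = np.histogram(dc_prev, bins=bins, range=value_range)
    h_curr, _ = np.histogram(dc_curr, bins=bins, range=value_range)
    hist_diff = np.abs(h_prev - h_curr).sum()        # feature 1: DC histogram difference

    dev_prev = dc_prev - dc_prev.mean()              # deviation from the frame's mean DC
    dev_curr = dc_curr - dc_curr.mean()
    dev_diff = np.abs(dev_prev - dev_curr).mean()    # feature 2: macroblock deviation difference
    return hist_diff, dev_diff

# Example with synthetic 8x8-macroblock DC images.
rng = np.random.default_rng(1)
prev = rng.integers(0, 256, (8, 8)).astype(float)
curr = rng.integers(0, 256, (8, 8)).astype(float)
print(hmm_observation_features(prev, curr))
```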

Vehicle Tracking using Euclidean Distance (유클리디안 척도를 이용한 차량 추적)

  • Kim, Gyu-Yeong;Kim, Jae-Ho;Park, Jang-Sik;Kim, Hyun-Tae;Yu, Yun-Sik
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.7 no.6 / pp.1293-1299 / 2012
  • In this paper, a real-time vehicle detection and tracking algorithm is proposed. Vehicle detection is performed using a GMM (Gaussian Mixture Model) algorithm and mathematical morphological processing on HD CCTV camera images. Tracking of the separated vehicle objects is performed using the Euclidean distance between detected objects. In more detail, the background is estimated from the CCTV input image signal using the GMM, and objects are then separated from the difference image between the input image and the background image. At the next stage, candidate objects are refined by mathematical morphological processing. Finally, vehicle objects are detected using vehicle size information that depends on the distance and the vehicle type in the tunnel. Vehicle tracking is performed using the Euclidean distance between the objects in consecutive video frames. Computer simulation with real video recorded in a tunnel shows that the proposed system works well.
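
A minimal sketch of the Euclidean-distance association used for tracking: each object detected in the current frame is matched to the nearest object from the previous frame, with a simple distance gate. The centroids and the 50-pixel gate below are illustrative assumptions.

```python
# Hedged sketch: greedy nearest-centroid matching between consecutive frames.
import numpy as np

def associate_by_euclidean(prev_centroids, curr_centroids, max_dist=50.0):
    """Return a list of (prev_index, curr_index) matches within the distance gate."""
    matches = []
    for j, c in enumerate(curr_centroids):
        dists = [np.linalg.norm(np.asarray(c) - np.asarray(p)) for p in prev_centroids]
        if dists and min(dists) <= max_dist:
            matches.append((int(np.argmin(dists)), j))
    return matches

prev = [(100.0, 220.0), (400.0, 180.0)]
curr = [(108.0, 223.0), (395.0, 176.0), (640.0, 300.0)]   # third object is new
print(associate_by_euclidean(prev, curr))                  # [(0, 0), (1, 1)]
```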

Forward Vehicle Movement Estimation Algorithm (전방 차량 움직임 추정 알고리즘)

  • Park, Han-dong;Oh, Jeong-su
    • Journal of the Korea Institute of Information and Communication Engineering / v.21 no.9 / pp.1697-1702 / 2017
  • This paper proposes a forward vehicle movement estimation algorithm for image-based forward collision warning. The road region in the acquired image is designated as a region of interest (ROI), and a distance look-up table (LUT) is built in advance. The distance LUT gives the horizontal and vertical real-world distances from a reference pixel, corresponding to the test vehicle's position, to any pixel corresponding to a vehicle position in the ROI. The proposed algorithm detects vehicles in the ROI, assigns labels to them, and stores their distance information using the distance LUT. It then estimates vehicle movements such as the approach distance and the side-approaching and front-approaching velocities from the distance changes between frames. In forward vehicle movement estimation tests using road driving videos, the proposed algorithm produces valid estimates for 98.7%, 95.9%, and 94.3% of cases on average for the respective movement quantities.
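
A small sketch of the movement-estimation step after the LUT lookup, assuming the LUT maps a pixel to (lateral, longitudinal) real-world distances in metres; the toy LUT entries and the 30 fps frame interval are assumptions.

```python
# Hedged sketch: velocities from LUT-derived distance changes between two frames.
def estimate_movement(dist_lut, pixel_prev, pixel_curr, frame_dt):
    """dist_lut: dict mapping pixel (row, col) -> (lateral_m, longitudinal_m)."""
    x_prev, y_prev = dist_lut[pixel_prev]
    x_curr, y_curr = dist_lut[pixel_curr]
    side_velocity = (x_curr - x_prev) / frame_dt     # m/s along the lateral axis
    front_velocity = (y_curr - y_prev) / frame_dt    # m/s; negative means approaching
    return y_curr, side_velocity, front_velocity

# Toy two-entry LUT and a 1/30 s frame interval for illustration.
lut = {(400, 620): (0.5, 30.0), (410, 622): (0.6, 28.5)}
print(estimate_movement(lut, (400, 620), (410, 622), 1 / 30))
```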

A study on extraction of the frames representing each phoneme in continuous speech (연속음에서의 각 음소의 대표구간 추출에 관한 연구)

  • 박찬응;이쾌희
    • Journal of the Korean Institute of Telematics and Electronics B / v.33B no.4 / pp.174-182 / 1996
  • In a continuous speech recognition system, it is possible to implement a system that can handle an unlimited number of words by using a limited number of phonetic units such as phonemes. Dividing continuous speech into a string of phoneme segments prior to the recognition process can lower the complexity of the system, but because of coarticulation between neighboring phonemes it is very difficult to extract their boundaries exactly. In this paper, we propose an algorithm that extracts short segments representing each phoneme instead of extracting their boundaries. Segments of lower spectral change and segments of higher spectral change are detected. Phoneme changes are then detected by applying a distance measure to the low-spectral-change segments, while high-spectral-change segments are regarded as transition segments or short phoneme segments. Finally, the low-spectral-change segments and the midpoints of the high-spectral-change segments are taken to represent each phoneme. Cepstral coefficients and a weighted cepstral distance are used as the speech feature and the distance measure because of their low computational complexity, and the speech data used in this experiment were recorded in quiet and ordinary indoor environments. The experimental results show that the proposed algorithm achieves higher performance with less computational complexity than conventional segmentation algorithms and can be usefully applied to phoneme-based continuous speech recognition.

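A hedged sketch of a weighted cepstral distance of the kind used above to measure spectral change between analysis frames; the index-proportional lifter weights are a common choice but an assumption here, as is the exclusion of c0.

```python
# Hedged sketch: weighted cepstral distance between two analysis frames.
import numpy as np

def weighted_cepstral_distance(c1, c2):
    """c1, c2: cepstral coefficient vectors (c[1..p], excluding c0)."""
    k = np.arange(1, len(c1) + 1, dtype=float)   # index-proportional weights (assumed lifter)
    return float(np.sum((k * (np.asarray(c1) - np.asarray(c2))) ** 2))

frame_a = np.array([1.2, -0.4, 0.3, 0.1])
frame_b = np.array([1.1, -0.5, 0.2, 0.1])
print(weighted_cepstral_distance(frame_a, frame_b))
```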

Hierarchical Visualization of the Space of Facial Expressions (얼굴 표정공간의 계층적 가시화)

  • Kim Sung-Ho;Jung Moon-Ryul
    • Journal of KIISE: Computer Systems and Theory / v.31 no.12 / pp.726-734 / 2004
  • This paper presents a facial animation method that enables the user to select a sequence of facial frames from the facial expression space, whose level of detail the user can select hierarchically. Our system creates the facial expression space from about 2,400 captured facial frames. To represent the state of each expression, we use the distance matrix that holds the distances between pairs of feature points on the face. The shortest trajectories are found by dynamic programming. The space of facial expressions is multidimensional. To navigate this space, we visualize the space of expressions in 2D by using multidimensional scaling (MDS). However, because there are too many facial expressions to select from, the user has difficulty navigating the space, so we visualize the space hierarchically. To partition the space into a hierarchy of subspaces, we use fuzzy clustering. In the beginning, the system creates about 10 clusters from the space of 2,400 facial expressions. Every time the level increases, the system doubles the number of clusters. The cluster centers are displayed on the 2D screen and are used as candidate key frames for key-frame animation. The user selects new key frames along the navigation path of the previous level, and at the maximum level the user completes the key-frame specification. We let animators use the system to create example animations and evaluated the system based on the results.
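
A compact sketch of the representation and projection steps described above: each frame is described by the pairwise distances between its facial feature points, frame-to-frame dissimilarities are computed from those descriptors, and MDS projects the frames to 2D. The classical-MDS variant, the feature-point count, and the synthetic data are assumptions.

```python
# Hedged sketch: distance-matrix descriptors per frame, then classical MDS to 2D.
import numpy as np

def frame_descriptor(points):
    """points: (n_feature_points, 2 or 3) -> flattened upper triangle of the distance matrix."""
    diffs = points[:, None, :] - points[None, :, :]
    d = np.sqrt((diffs ** 2).sum(-1))
    iu = np.triu_indices(len(points), k=1)
    return d[iu]

def classical_mds(dissim, dim=2):
    n = dissim.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (dissim ** 2) @ J            # double-centred squared distances
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:dim]          # top eigenpairs
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0))

rng = np.random.default_rng(2)
frames = rng.normal(size=(20, 15, 2))           # 20 synthetic frames, 15 feature points each
desc = np.stack([frame_descriptor(f) for f in frames])
dissim = np.sqrt(((desc[:, None] - desc[None, :]) ** 2).sum(-1))
print(classical_mds(dissim).shape)              # (20, 2) screen coordinates
```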

Human Tracking Based On Context Awareness In Outdoor Environment

  • Binh, Nguyen Thanh;Khare, Ashish;Thanh, Nguyen Chi
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.6 / pp.3104-3120 / 2017
  • Intelligent monitoring systems have been successfully applied in many fields such as monitoring of production lines, transportation, etc. Smart surveillance systems have been developed and proven effective in specific areas such as monitoring of human activity, traffic, etc. Most critical monitoring applications involve object tracking as one of the key steps. However, tracking a moving object is not easy. In this paper, the authors propose a method for human object tracking in outdoor environments based on human features in the shearlet domain. The proposed method uses the shearlet transform, which combines the human features with context-sensitivity, in order to improve the accuracy of human tracking. The proposed algorithm not only improves the edge accuracy but also reduces wrong positions of the object between frames. The authors validated the proposed method by calculating Euclidean distance and Mahalanobis distance values between the center of the actual object and the center of the tracked object, and found that the proposed method gives better results than other recently available methods.
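
A small sketch of the validation step mentioned above: per-frame Euclidean distances between the tracked and actual object centres, plus Mahalanobis distances using a covariance estimated from the centre errors. The sample centres and the way the covariance is estimated are assumptions.

```python
# Hedged sketch: Euclidean and Mahalanobis centre-error metrics over a track.
import numpy as np

def centre_errors(true_centres, tracked_centres):
    t = np.asarray(true_centres, dtype=float)
    p = np.asarray(tracked_centres, dtype=float)
    errors = p - t
    euclid = np.linalg.norm(errors, axis=1)                   # per-frame Euclidean distance
    cov_inv = np.linalg.inv(np.cov(errors, rowvar=False))     # needs several frames of errors
    mahal = np.sqrt(np.einsum("ij,jk,ik->i", errors, cov_inv, errors))
    return euclid.mean(), mahal.mean()

truth   = [(120, 80), (122, 83), (125, 85), (127, 88), (130, 90)]
tracked = [(121, 82), (124, 82), (124, 87), (129, 88), (131, 93)]
print(centre_errors(truth, tracked))
```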

Estimation of Person Height and 3D Location using Stereo Tracking System (스테레오 추적 시스템을 이용한 보행자 높이 및 3차원 위치 추정 기법)

  • Ko, Jung Hwan;Ahn, Sung Soo
    • Journal of Korea Society of Digital Industry and Information Management / v.8 no.2 / pp.95-104 / 2012
  • In this paper, estimation of the height and 3D location of a moving person using a pan/tilt-embedded stereo tracking system is suggested and implemented. In the proposed system, the face coordinates of a target person are detected from the sequential input stereo image pairs by using the YCbCr color model and phase-type correlation methods; then, using these data together with the geometric information of the stereo tracking system, the distance from the stereo camera to the target and the 3-dimensional location of the target person are extracted. Based on these extracted data, the pan/tilt system embedded in the stereo camera is controlled to adaptively track the moving person, and as a result the moving trajectory of the target person can be obtained. In experiments using 780 frames of sequential stereo image pairs, the standard deviation of the target's position displacement in the horizontal and vertical directions after tracking remains very low, at 1.5 and 0.42 on average over the 780 frames, and the error ratio between the measured and computed 3D coordinate values of the target is also kept to a very low value of 0.5% on average. These good experimental results suggest the possibility of implementing a new stereo target tracking system with a high degree of accuracy and a very fast response time using the proposed algorithm.
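
A minimal sketch of the 3D-location idea under a standard parallel-axis stereo model (Z = f·B/d); the paper's pan/tilt rig geometry is more involved, and the focal length, baseline, and principal point below are illustrative assumptions.

```python
# Hedged sketch: 3D position of a detected face centre from left/right image coordinates.
def stereo_3d_location(u_left, v_left, u_right, focal_px, baseline_m, cx, cy):
    disparity = u_left - u_right                 # pixels; must be > 0
    z = focal_px * baseline_m / disparity        # depth in metres
    x = z * (u_left - cx) / focal_px             # lateral offset
    y = z * (v_left - cy) / focal_px             # vertical offset
    return x, y, z

# Face centre at (700, 380) in the left image and (660, 380) in the right image (assumed values).
print(stereo_3d_location(700, 380, 660, focal_px=1200.0, baseline_m=0.12, cx=640, cy=360))
```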

Motion Compensated Subband Video Coding with Arbitrarily Shaped Region Adaptivity

  • Kwon, Oh-Jin;Choi, Seok-Rim
    • ETRI Journal / v.23 no.4 / pp.190-198 / 2001
  • The performance of Motion Compensated Discrete Cosine Transform (MC-DCT) video coding is improved by using region-adaptive subband image coding [18]. On the assumption that the video is acquired from a camera on a moving platform and the distance between the camera and the scene is large enough, both the motion of the camera and the motion of moving objects in a frame are compensated. For the compensation of camera motion, a feature matching algorithm is employed. Several feature points extracted using a Sobel operator are used to compensate the camera motion of translation, rotation, and zoom. The illumination change between frames is also compensated. The motion-compensated frame differences are divided into three arbitrarily shaped regions called stationary background, moving objects, and newly emerging areas. Different quantizers are used for the different regions. Compared to conventional MC-DCT video coding using a block matching algorithm, our video coding scheme shows about a 1.0 dB improvement on average for the experimental video samples.

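A sketch of the global camera-motion idea: fit a similarity transform (zoom, rotation, translation) to matched feature points between two frames by linear least squares. The paper matches Sobel-based feature points; the correspondences and solver below are illustrative assumptions.

```python
# Hedged sketch: least-squares fit of a similarity transform to point correspondences.
import numpy as np

def fit_similarity(src, dst):
    """src, dst: (n, 2) matched points. Returns (scale, theta, tx, ty)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, -y, 1, 0]); b.append(u)     # u = a*x - b*y + tx, with a = s*cos, b = s*sin
        A.append([y,  x, 0, 1]); b.append(v)     # v = b*x + a*y + ty
    a_, b_, tx, ty = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float), rcond=None)[0]
    return float(np.hypot(a_, b_)), float(np.arctan2(b_, a_)), float(tx), float(ty)

# Synthetic correspondences generated by a known zoom/rotation/translation.
src = np.array([[100, 50], [300, 60], [200, 220], [60, 180]], float)
theta, scale, t = 0.02, 1.05, np.array([4.0, -2.0])
R = scale * np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
dst = src @ R.T + t
print(fit_similarity(src, dst))    # approximately (1.05, 0.02, 4.0, -2.0)
```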

An Efficient Video Clip Matching Algorithm Using the Cauchy Function (커쉬함수를 이용한 효율적인 비디오 클립 정합 알고리즘)

  • Kim Sang-Hyul
    • Journal of the Institute of Convergence Signal Processing / v.5 no.4 / pp.294-300 / 2004
  • With the development of digital media technologies, various algorithms for video clip matching have been proposed to match video sequences efficiently. A large number of video search methods have focused on frame-wise queries, whereas relatively few algorithms have been presented for video clip matching or video shot matching. In this paper, we propose an efficient algorithm to index video sequences and to retrieve them for a video clip query. To improve the accuracy and performance of video sequence matching, we employ the Cauchy function as a similarity measure between the histograms of consecutive frames, which yields high performance compared with conventional measures. The key frames extracted from segmented video shots can be used not only for video shot clustering but also for video sequence matching or browsing, where a key frame is defined as a frame that differs significantly from the previous frames. Experimental results with color video sequences show that the proposed method yields high matching performance and accuracy with a low computational load compared with conventional algorithms.

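A hedged sketch of a Cauchy-type similarity between the normalised colour histograms of two frames, where each bin difference is passed through 1/(1+(x/a)^2) and the results are averaged so that small differences count almost fully and large ones are suppressed; the exact functional form and scale parameter used in the paper are not reproduced here, so both are assumptions.

```python
# Hedged sketch: Cauchy-type similarity between two frame histograms.
import numpy as np

def cauchy_similarity(hist1, hist2, a=0.05):
    h1 = np.asarray(hist1, float); h1 /= h1.sum()    # normalise to unit mass
    h2 = np.asarray(hist2, float); h2 /= h2.sum()
    return float(np.mean(1.0 / (1.0 + ((h1 - h2) / a) ** 2)))

frame_a = [10, 40, 30, 20]
frame_b = [12, 38, 31, 19]
print(cauchy_similarity(frame_a, frame_b))    # close to 1 for similar frames
```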