• Title/Summary/Keyword: Image sequence

Search results: 988

An Image Analysis Technique Using Integral Projections in Object-Oriented Analysis-Synthesis Coding (물체지향 분석 및 합성 부호화에서 가산 투영을 이용한 영상분석기법)

  • 김준석;박래홍
    • Journal of the Korean Institute of Telematics and Electronics B / v.31B no.8 / pp.87-98 / 1994
  • Object-oriented analysis-synthesis coding subdivides each image of a sequence into moving objects and compensates the motion of each object, so it can reconstruct real motion better than conventional motion-compensated coding techniques at very low bit rates. It estimates the motion information of each object with a mapping-parameter technique. Because the mapping-parameter technique relies on gradient operators, it is sensitive to redundant detail and noise. To determine the mapping parameters accurately, we propose a new analysis method that uses integral projections to estimate the gradient values. Also, to reconstruct local motion correctly, the proposed algorithm divides an image into segmented objects, each of which has uniform motion information, whereas the conventional method assumes one large object with the same motion information. Computer simulations with several test sequences show that the proposed image analysis method for object-oriented analysis-synthesis coding performs better than the conventional one.
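
The integral projections referred to in the abstract above are simply row and column sums of an image region; differencing them gives gradient estimates that average out noise across whole rows and columns. A minimal numpy sketch of that idea (function names and scaling are illustrative, not taken from the paper):

```python
import numpy as np

def integral_projections(region):
    """Horizontal and vertical integral projections of a 2-D image region.

    The horizontal projection sums intensities along each row, the
    vertical projection along each column.
    """
    h_proj = region.sum(axis=1)   # one value per row
    v_proj = region.sum(axis=0)   # one value per column
    return h_proj, v_proj

def projection_gradients(region):
    """Rough 1-D gradient estimates from the projections.

    Differencing the projections averages out noise across a whole
    row/column, which is the robustness argument made in the abstract.
    """
    h_proj, v_proj = integral_projections(region.astype(np.float64))
    dy = np.diff(h_proj) / region.shape[1]   # mean vertical gradient per row
    dx = np.diff(v_proj) / region.shape[0]   # mean horizontal gradient per column
    return dx, dy

# Example: a synthetic 8x8 block with a vertical edge
block = np.zeros((8, 8)); block[:, 4:] = 255.0
print(projection_gradients(block))
```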

A New Vehicle Detection Method based on Color Integral Histogram

  • Hwang, Jae-Pil;Ryu, Kyung-Jin;Park, Seong-Keun;Kim, Eun-Tai;Kang, Hyung-Jin
    • International Journal of Fuzzy Logic and Intelligent Systems / v.8 no.4 / pp.248-253 / 2008
  • In this paper, a novel vehicle detection algorithm that utilizes the color histogram of the image is proposed. The color histogram is used to search the image for regions with shadow, block symmetry, and block non-homogeneity, thereby detecting the vehicle region. First, an integral histogram of the input image is computed to reduce the computation time required for the block color histograms. Then, shadow detection is performed, and the block symmetry and block non-homogeneity are checked in a cascade to detect the vehicle in the image. Finally, the proposed scheme is applied to both still images taken in a parking lot and an on-road video sequence to demonstrate its effectiveness.
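
The integral histogram mentioned above makes the color histogram of any axis-aligned block available with a few additions via inclusion-exclusion. A minimal sketch under that reading (the bin count and variable names are illustrative, not from the paper):

```python
import numpy as np

def build_integral_histogram(img, n_bins=16):
    """Cumulative per-bin counts so any block histogram costs O(n_bins).

    img: 2-D array of quantized color values in [0, 255] (e.g. a hue channel).
    Returns an (H+1, W+1, n_bins) cumulative table.
    """
    h, w = img.shape
    bins = (img.astype(np.int32) * n_bins) // 256          # bin index per pixel
    one_hot = np.eye(n_bins, dtype=np.int64)[bins]          # (H, W, n_bins)
    table = np.zeros((h + 1, w + 1, n_bins), dtype=np.int64)
    table[1:, 1:] = one_hot.cumsum(axis=0).cumsum(axis=1)
    return table

def block_histogram(table, top, left, bottom, right):
    """Histogram of img[top:bottom, left:right] by inclusion-exclusion."""
    return (table[bottom, right] - table[top, right]
            - table[bottom, left] + table[top, left])

# Usage: histogram of a 40x40 block without rescanning its pixels
img = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
tab = build_integral_histogram(img)
print(block_histogram(tab, 100, 200, 140, 240))
```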

Optic Flow for Motion Vision: Survey (이동 물체 인식을 위한 Optic Flow)

  • 이종수
    • The Journal of Korean Institute of Communications and Information Sciences / v.11 no.1 / pp.1-15 / 1986
  • Optic flow is the 2-D velocity field obtained by projecting the 3-D velocity of a moving surface element onto the image plane. In this paper, we survey techniques for computing optic flow from an image time sequence of moving objects, and techniques for determining the 3-D velocities and surface structures of the moving objects from the computed optic flow.
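
A standard starting point for the optic-flow techniques surveyed here (stated as general background, not as this particular paper's formulation) is the brightness-constancy assumption, whose first-order expansion gives the optical-flow constraint equation:

```latex
% Brightness constancy: a surface point keeps its intensity as it moves
%   I(x + u\,\delta t,\; y + v\,\delta t,\; t + \delta t) = I(x, y, t)
% A first-order Taylor expansion yields the optical-flow constraint
I_x\, u + I_y\, v + I_t = 0
% where I_x, I_y, I_t are the spatial and temporal image derivatives and
% (u, v) is the optic-flow vector at the pixel.
```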

Characteristics of Image Sticking Observed During Background Display in AC-PDP (AC PDP의 배경광 잔상특성)

  • 류재화;임성현;김동현;김중균;이호준;박정후
    • The Transactions of the Korean Institute of Electrical Engineers C / v.53 no.2 / pp.91-96 / 2004
  • Under darkroom conditions, it was observed that a white picture pattern displayed for several minutes leaves a recognizable trace in the subsequent black background picture. Although this is not a serious problem for most current public display or home TV applications, image sticking should be minimized for future high-quality multimedia display applications. To characterize this picture-memory effect, which has a relatively long time scale, spatially resolved luminance measurements and light-waveform measurements were performed. Pixels located at the outer boundary of the previously displayed white pattern show the highest luminance. These cells also show the fastest ignition during the ramp-up reset sequence. The luminance and ignition-voltage differences between the boundary cells and the other cells increase with display duration and the number of sustain pulses. It is speculated that the image sticking observed at the boundary cells originates from the transport of charged particles and the re-deposition of reactive species such as Mg and O supplied from the strong sustain-discharge region.

A study on Face Image Classification for Efficient Face Detection Using FLD

  • Nam, Mi-Young;Kim, Kwang-Baek
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2004.05a / pp.106-109 / 2004
  • Many reported methods assume that the faces in an image or an image sequence have already been identified and localized. Face detection in images is a challenging task because of variability in scale, location, orientation, and pose, and it is made considerably harder by changes in illumination. In this paper, we present an efficient linear-discriminant approach to multi-view face detection. The training data are organized with the Fisher linear discriminant (FLD) to obtain an efficient learning method, which allows the multi-view and multi-scale face detection problem to be solved quickly and automatically. Faces are extracted using hierarchical FLD models that are invariant to pose and background, and the pose and eye locations of each detected face are then estimated. The purpose of this paper is to classify face and non-face regions efficiently with the Fisher linear discriminant.
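
As general background on the classifier named above (not the paper's exact hierarchical pipeline), a two-class Fisher linear discriminant can be computed directly from face and non-face feature vectors; a minimal numpy sketch:

```python
import numpy as np

def fisher_linear_discriminant(X_face, X_nonface):
    """Two-class Fisher linear discriminant.

    Returns the projection vector w maximizing between-class scatter
    relative to within-class scatter, plus a midpoint decision threshold.
    """
    mu1, mu2 = X_face.mean(axis=0), X_nonface.mean(axis=0)
    # Within-class scatter matrix pooled over both classes
    Sw = ((X_face - mu1).T @ (X_face - mu1)
          + (X_nonface - mu2).T @ (X_nonface - mu2))
    w = np.linalg.solve(Sw, mu1 - mu2)      # optimal projection direction
    threshold = 0.5 * (mu1 + mu2) @ w       # midpoint between projected means
    return w, threshold

def is_face(x, w, threshold):
    """Classify a feature vector by which side of the threshold it projects to."""
    return x @ w > threshold

# Toy usage with random 64-dimensional features standing in for image patches
rng = np.random.default_rng(0)
faces = rng.normal(1.0, 1.0, (200, 64))
nonfaces = rng.normal(0.0, 1.0, (200, 64))
w, t = fisher_linear_discriminant(faces, nonfaces)
print(is_face(faces[0], w, t), is_face(nonfaces[0], w, t))
```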

An Iterated Optical Flow Estimation Method for Automatically Tracking and Positioning Homologous Points in Video Image Sequences

  • Tsay, Jaan-Rong;Lee, I-Chien
    • Proceedings of the KSRS Conference / 2003.11a / pp.372-374 / 2003
  • Optical flow theory can be utilized to automatically track and position homologous points in digital video (DV) image sequences. In this paper, the Lucas-Kanade optical flow estimation (LKOFE) method and the normalized cross-correlation (NCC) method are compared and analyzed using DV image sequences acquired with our SONY DCRPC115 DV camera. An improved optical flow estimation procedure, called 'Iterated Optical Flow Estimation (IOFE)', is then presented. Our test results show that the trackable range of 3~4 pixels in the LKOFE procedure is clearly enlarged to 30 pixels in the IOFE.
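
The iteration idea can be approximated with OpenCV's pyramidal Lucas-Kanade tracker by re-feeding each pass's estimate as the initial flow for the next pass. This is only an illustrative sketch of iterated LK tracking, not the authors' IOFE implementation, and the file names are hypothetical:

```python
import cv2
import numpy as np

def iterated_lk_track(prev_gray, next_gray, points, n_passes=3):
    """Track points with pyramidal Lucas-Kanade, refining the estimate over
    several passes so larger displacements can still be followed.

    points: (N, 1, 2) float32 array of (x, y) positions in prev_gray.
    """
    estimate = points.copy()
    for _ in range(n_passes):
        estimate, status, _err = cv2.calcOpticalFlowPyrLK(
            prev_gray, next_gray, points, estimate,
            winSize=(21, 21), maxLevel=3,
            flags=cv2.OPTFLOW_USE_INITIAL_FLOW,
            criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))
    return estimate, status

# Usage with two consecutive frames and Shi-Tomasi corners (hypothetical files)
frame0 = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)
frame1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)
corners = cv2.goodFeaturesToTrack(frame0, maxCorners=200, qualityLevel=0.01, minDistance=7)
tracked, ok = iterated_lk_track(frame0, frame1, np.float32(corners))
```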

Facial Data Visualization for Improved Deep Learning Based Emotion Recognition

  • Lee, Seung Ho
    • Journal of Information Science Theory and Practice / v.7 no.2 / pp.32-39 / 2019
  • A convolutional neural network (CNN) has been widely used in facial expression recognition (FER) because it can automatically learn discriminative appearance features from an expression image. To make full use of this discriminating capability, this paper suggests a simple but effective method for CNN-based FER. Specifically, instead of the original expression image, which contains facial appearance only, an expression image with facial geometry visualization is used as input to the CNN. In this way, geometric and appearance features can be learned simultaneously, making the CNN more discriminative for FER. A simple CNN extension is also presented, aiming to utilize the geometric expression change derived from an expression image sequence. Experimental results on two public datasets (CK+ and MMI) show that the CNN using facial geometry visualization clearly outperforms the conventional CNN using facial appearance only.
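
One plausible reading of "facial geometry visualization" is to draw the facial landmark layout directly onto the face crop before it is fed to the CNN, so appearance and geometry share one input image. The sketch below only illustrates that reading with hypothetical landmark coordinates; it is not the author's exact rendering scheme:

```python
import cv2
import numpy as np

def visualize_geometry(face_img, landmarks, radius=2):
    """Overlay landmark points and a connecting polyline on a face crop.

    face_img:  HxWx3 uint8 face image.
    landmarks: (N, 2) array of (x, y) landmark positions from any detector
               (hypothetical input here).
    Returns the image that would replace the raw crop as CNN input.
    """
    canvas = face_img.copy()
    pts = landmarks.astype(np.int32)
    cv2.polylines(canvas, [pts.reshape(-1, 1, 2)], isClosed=False,
                  color=(0, 255, 0), thickness=1)
    for x, y in pts:
        cv2.circle(canvas, (int(x), int(y)), radius, (0, 0, 255), -1)
    return canvas

# Toy usage: a flat 96x96 "face" with a few made-up landmark positions
face = np.full((96, 96, 3), 128, dtype=np.uint8)
marks = np.array([[30, 40], [66, 40], [48, 60], [36, 75], [60, 75]], dtype=np.float32)
cnn_input = visualize_geometry(face, marks)
```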

Real time detection and recognition of traffic lights using component subtraction and detection masks (성분차 색분할과 검출마스크를 통한 실시간 교통신호등 검출과 인식)

  • Jeong Jun-Ik;Rho Do-Whan
    • Journal of the Institute of Electronics Engineers of Korea SP / v.43 no.2 s.308 / pp.65-72 / 2006
  • A traffic-light detection and recognition system is an essential module of a driver warning and assistance system. A color-vision-based method for real-time detection and recognition of traffic lights is presented in this paper. The method has four main modules: a traffic-signal-light detection module, a traffic-light boundary candidate determination module, a boundary detection module, and a recognition module. In the light detection and boundary detection modules, color thresholding, the difference between the saturation and intensity components in HSI color space, and a detection probability mask for the lights are used to segment the image. In the boundary candidate determination module, a detection mask for the traffic-light boundary is proposed. In the recognition module, the AND operator is applied to the results of the two detection modules. The input data for this method is a color image sequence taken from a moving vehicle by a color video camera; the recorded images were transformed by the zoom function of the camera, and detection and recognition results are presented for this zoomed image sequence.
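
The saturation-minus-intensity cue described above can be sketched as follows. OpenCV provides HSV rather than HSI, so the V channel stands in for intensity, and the threshold value is illustrative rather than taken from the paper:

```python
import cv2
import numpy as np

def saturation_intensity_mask(bgr_frame, thresh=40):
    """Segment candidate lamp pixels by thresholding the S-V difference.

    The paper segments the image using the difference between the saturation
    and intensity components in HSI space; HSV's V channel approximates I
    here, and the threshold is a placeholder, not the paper's value.
    """
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV).astype(np.int16)
    diff = hsv[:, :, 1] - hsv[:, :, 2]              # S - V for every pixel
    mask = (diff > thresh).astype(np.uint8) * 255
    # Remove isolated pixels before looking for lamp-sized blobs
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))

# Usage on one frame of the zoomed sequence (file name is hypothetical)
frame = cv2.imread("traffic_frame.png")
candidates = saturation_intensity_mask(frame)
```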

Wavelet-based Digital Watermarking Using Multiple Threshold (다중 임계치를 적용한 웨이브릿 기반 디지털 워터마킹 기법)

  • Kim, Jae-Won;Nam, Jae-Yeal
    • The KIPS Transactions: Part B / v.10B no.4 / pp.419-428 / 2003
  • Recently, digital watermarking has been proposed as a viable solution to the need for copyright protection and authentication of multimedia data. A robust wavelet-based watermark casting scheme and a watermark retrieval technique are suggested in this paper. We present a method that adds the watermark to the significant coefficients in the DWT domain and does not require the original image in the detection process. An adaptive watermark casting method is developed to select perceptually significant coefficients for each subband using multiple thresholds. In the proposed method, an adaptive multiple-threshold scheme is used to reflect the characteristics of each subband and the complexity of the image. The watermark is weighted adaptively in the different subbands to achieve robustness as well as high perceptual quality. The watermark, a Gaussian random sequence, is added to the large coefficients in the DWT domain, excluding the lowest subband. Experimental results show that the proposed algorithm produces watermarked images of very good visual quality, in which the embedded watermark is invisible to the human eye, and that it is very robust against various image-processing and compression attacks.
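
A minimal sketch of threshold-based watermark embedding in the DWT domain, using PyWavelets. The single threshold and weight below stand in for the paper's adaptive per-subband multiple thresholds; only detail coefficients above the threshold carry the watermark, and the lowest (approximation) subband is left untouched:

```python
import numpy as np
import pywt

def embed_watermark(image, key=0, level=2, alpha=0.05, thresh=30.0):
    """Add a Gaussian watermark to large detail coefficients in the DWT domain.

    alpha and thresh are placeholders for the paper's adaptive per-subband
    values; `key` seeds the Gaussian sequence so it can be regenerated for
    detection without the original image.
    """
    rng = np.random.default_rng(key)
    coeffs = pywt.wavedec2(image.astype(np.float64), "db4", level=level)
    marked = [coeffs[0]]                         # keep the approximation subband
    for detail_level in coeffs[1:]:
        marked_bands = []
        for band in detail_level:                # horizontal, vertical, diagonal
            wm = rng.standard_normal(band.shape)
            significant = np.abs(band) > thresh  # perceptually significant coeffs
            marked_bands.append(band + alpha * np.abs(band) * wm * significant)
        marked.append(tuple(marked_bands))
    return pywt.waverec2(marked, "db4")

# Usage on a synthetic 256x256 image
img = np.random.randint(0, 256, (256, 256)).astype(np.float64)
watermarked = embed_watermark(img, key=42)
```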

A Defocus Technique based Depth from Lens Translation using Sequential SVD Factorization

  • Kim, Jong-Il;Ahn, Hyun-Sik;Jeong, Gu-Min;Kim, Do-Hyun
    • Institute of Control, Robotics and Systems Conference Proceedings / 2005.06a / pp.383-388 / 2005
  • Depth recovery is an essential problem in robot vision: inferring the three-dimensional geometry of a scene from a sequence of two-dimensional images. Many approaches to depth estimation have been proposed, based on cues such as stereopsis, motion parallax, and blurring. Among these, depth from lens translation is based on shape from motion using feature points: correspondences between feature points detected in the images are established, and the depth is estimated from the motion of those points. Approaches using motion vectors suffer from occlusion and missing-part problems, and image blur is ignored during feature-point detection. This paper presents a novel defocus-technique-based approach to depth from lens translation using sequential SVD factorization. Solving these problems requires modeling the interaction between the light and the optics up to the image plane, so we first discuss the optical properties of the camera system, because the image blur varies with the camera parameter settings. The camera model integrates a thin-lens camera model, which explains the light and optical properties, with a perspective-projection camera model, which explains depth from lens translation. Depth from lens translation is then performed using feature points detected at the edges of the image blur; these feature points carry depth information derived from the blur width. The shape and motion are estimated from the motion of the feature points using sequential SVD factorization, which yields the orthogonal matrices of the singular value decomposition. Experiments with sequences of real and synthetic images, comparing the presented method with conventional depth from lens translation, demonstrate the validity and applicability of the proposed method to depth estimation.
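
As background on the factorization step only (a generic Tomasi-Kanade-style sketch, not the authors' full defocus pipeline with blur-edge features), shape and motion can be recovered from a rank-3 SVD of the centered feature-track matrix:

```python
import numpy as np

def factor_shape_and_motion(tracks):
    """Rank-3 factorization of feature tracks into motion and shape.

    tracks: (2*F, P) matrix stacking the x rows then y rows of P feature
    points over F frames (a generic structure-from-motion layout).
    Returns an affine motion matrix (2F x 3) and shape matrix (3 x P).
    """
    # Center each row: removes the per-frame translation component
    W = tracks - tracks.mean(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    # Keep the three dominant singular values (rank-3 affine camera model)
    motion = U[:, :3] * np.sqrt(s[:3])
    shape = np.sqrt(s[:3])[:, None] * Vt[:3]
    return motion, shape

# Toy usage: 5 frames of 20 points observed under random affine projections
rng = np.random.default_rng(1)
true_shape = rng.normal(size=(3, 20))
tracks = np.vstack([rng.normal(size=(2, 3)) @ true_shape for _ in range(5)])
motion, shape = factor_shape_and_motion(tracks + 0.01 * rng.normal(size=tracks.shape))
print(motion.shape, shape.shape)
```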
