• Title/Summary/Keyword: 영상 특징추출 (image feature extraction)

Search Result 2,340

Content-based Image Retrieval using Color Correlogram from a Segmented Image (분할된 영상에서의 칼라 코렐로그램을 이용한 내용기반 영상검색)

  • An, Myung-Seok;Cho, Seok-Je
    • Journal of KIISE: Computer Systems and Theory
    • /
    • v.28 no.10
    • /
    • pp.507-512
    • /
    • 2001
  • Feature extraction methods for efficient content-based image retrieval have recently been studied extensively. In particular, many researchers have focused on extracting features from color information because of its advantages. This paper proposes a color-based image feature and its extraction method. The proposed feature is computed from an image segmented into two parts: a complex part and a plain part. Our experiments show that the proposed method performs better than the original color correlogram method. (A sketch of a basic color correlogram computation follows this entry.)

  • PDF
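
The entry above gives no implementation details, so the following is only a minimal sketch of the general color (auto-)correlogram idea: for each quantized color and each distance, estimate the probability that a pixel at that distance has the same color. As a simplification it compares only the four axis-aligned neighbours at each distance; the function name, distances, and quantization are illustrative, not taken from the paper.

```python
import numpy as np

def color_correlogram(quantized, n_colors, distances=(1, 3, 5, 7)):
    """Auto-correlogram of a color-quantized image (values in [0, n_colors)).

    For each color c and distance d, estimates the probability that a pixel
    at distance d from a pixel of color c also has color c. Only the four
    axis-aligned neighbours at distance d are used, as a simplification.
    """
    h, w = quantized.shape
    feat = np.zeros((len(distances), n_colors))
    for di, d in enumerate(distances):
        counts = np.zeros(n_colors)
        totals = np.zeros(n_colors)
        for dy, dx in ((d, 0), (-d, 0), (0, d), (0, -d)):
            # Crop so that both the pixel and its shifted neighbour are in bounds.
            y0, y1 = max(-dy, 0), h - max(dy, 0)
            x0, x1 = max(-dx, 0), w - max(dx, 0)
            src = quantized[y0:y1, x0:x1]
            dst = quantized[y0 + dy:y1 + dy, x0 + dx:x1 + dx]
            same = src == dst
            for c in range(n_colors):
                mask = src == c
                counts[c] += np.count_nonzero(same & mask)
                totals[c] += np.count_nonzero(mask)
        feat[di] = counts / np.maximum(totals, 1)
    return feat.ravel()
```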

Design and Implementation of a Virtual Image Insertion System with a Sports Field Model (경기장 모델을 이용한 가상 영상 삽입 시스템의 설계 및 구현)

  • Yoo, Seong;Han, Song-Yi;Lee, Seong-Whan
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2001.10b
    • /
    • pp.391-393
    • /
    • 2001
  • The virtual image insertion system proposed in this paper carries out all processing automatically, without camera manipulation or operator intervention. The system proceeds through the following steps: defining a field coordinate system and determining the size and position of the image to be inserted, extracting feature points of the playing field, estimating projective transformation parameters from the field coordinate system and the feature points of a reference image, and locating and tracking the insertion position in the actual video to insert the virtual image. To verify the performance of the proposed system, experiments were conducted on broadcast NTSC video data; the results demonstrate that each module and the overall system are effective. (A sketch of the homography-based insertion step follows this entry.)

  • PDF
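
The abstract above describes estimating a projective transformation from matched field feature points and using it to place a virtual image into the broadcast frame. The OpenCV sketch below illustrates only that insertion step, assuming the field-to-frame point correspondences and the insertion rectangle (in field coordinates) are already available; all names and parameters are placeholders rather than the paper's implementation.

```python
import cv2
import numpy as np

def insert_virtual_image(frame, logo, field_pts, image_pts, insert_rect_field):
    """Insert `logo` into `frame` at a rectangle defined in field coordinates.

    field_pts / image_pts : matched feature points (field plane <-> frame).
    insert_rect_field     : 4 corners of the insertion area, in field coords.
    """
    # Projective transform (homography) from the field plane to the frame.
    H, _ = cv2.findHomography(np.float32(field_pts), np.float32(image_pts),
                              cv2.RANSAC)
    # Corners of the insertion rectangle, projected into the frame.
    rect = cv2.perspectiveTransform(
        np.float32(insert_rect_field).reshape(-1, 1, 2), H).reshape(-1, 2)
    # Map the logo onto that quadrilateral and paste it over the frame.
    lh, lw = logo.shape[:2]
    src = np.float32([[0, 0], [lw, 0], [lw, lh], [0, lh]])
    M = cv2.getPerspectiveTransform(src, np.float32(rect))
    h, w = frame.shape[:2]
    warped = cv2.warpPerspective(logo, M, (w, h))
    mask = cv2.warpPerspective(np.full((lh, lw), 255, np.uint8), M, (w, h))
    out = frame.copy()
    out[mask > 0] = warped[mask > 0]
    return out
```

In practice the point correspondences would come from the field feature-point extraction and tracking stages described in the abstract.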

A Study on Object Finding Method by Using Genetic Algorithm and hybrid Features (유전 알고리즘과 다중 특징 사용에 의한 물체 추출 방법에 관한 연구)

  • 안명석;신현욱;조석제
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 1999.11a
    • /
    • pp.184-188
    • /
    • 1999
  • With the recent growth of industry, image processing techniques are being widely applied, and in vision and multimedia applications in particular, much research has focused on quickly detecting the location of a desired object in a given image. Image data obtained from a CCD camera are used for object localization, pattern classification, feature extraction, and other purposes. Most existing localization methods search the entire image, comparing every region to find the desired object. This paper proposes a method that, instead of comparing every region, first finds the approximate location of the object using a genetic algorithm and color histogram intersection, and then refines the location in the surrounding area using an adjacent-color histogram. Compared with simply comparing every region of the image with the adjacent-color and color histograms, the proposed method greatly reduces the number of comparisons while still locating the desired object accurately. (A simplified sketch of the histogram-intersection fitness and search loop follows this entry.)

  • PDF
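
As a rough illustration of the search strategy described above, the sketch below uses color histogram intersection as the fitness of candidate windows and evolves window positions with a heavily simplified, mutation-only genetic loop. It is not the authors' algorithm: the population size, mutation scale, omission of crossover, and omission of the adjacent-color refinement step are all simplifications.

```python
import numpy as np

def color_hist(patch, bins=8):
    """Normalized joint RGB histogram of an image patch."""
    h, _ = np.histogramdd(patch.reshape(-1, 3), bins=(bins,) * 3,
                          range=((0, 256),) * 3)
    return h.ravel() / max(h.sum(), 1)

def intersection(h1, h2):
    """Histogram intersection similarity in [0, 1]."""
    return np.minimum(h1, h2).sum()

def ga_search(img, model_hist, win, pop=30, gens=40, seed=0):
    """Coarse object localization: evolve window positions whose color
    histogram best intersects the model histogram (mutation-only GA)."""
    rng = np.random.default_rng(seed)
    H, W = img.shape[:2]
    wy, wx = win
    xs = rng.integers(0, W - wx, pop)
    ys = rng.integers(0, H - wy, pop)
    for _ in range(gens):
        fit = np.array([intersection(model_hist,
                                     color_hist(img[y:y + wy, x:x + wx]))
                        for x, y in zip(xs, ys)])
        order = np.argsort(fit)[::-1]
        xs, ys = xs[order[:pop // 2]], ys[order[:pop // 2]]
        # Offspring: parents perturbed by Gaussian mutation.
        xs = np.clip(np.concatenate([xs, xs + rng.normal(0, 10, xs.size)]),
                     0, W - wx).astype(int)
        ys = np.clip(np.concatenate([ys, ys + rng.normal(0, 10, ys.size)]),
                     0, H - wy).astype(int)
    best = np.argmax([intersection(model_hist,
                                   color_hist(img[y:y + wy, x:x + wx]))
                      for x, y in zip(xs, ys)])
    return xs[best], ys[best]
```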

Face Recognition based on SURF Interest Point Extraction Algorithm (SURF 특징점 추출 알고리즘을 이용한 얼굴인식 연구)

  • Kang, Min-Ku;Choo, Won-Kook;Moon, Seung-Bin
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.48 no.3
    • /
    • pp.46-53
    • /
    • 2011
  • This paper proposes a face recognition method based on SURF (Speeded Up Robust Features), one of the representative interest point extraction algorithms. In general, SURF-based object recognition consists of interest point extraction and matching. The proposed method adds two further steps: face image rotation and interest point verification. Image rotation is performed to increase the number of interest points, and interest point verification is performed to retain only correctly matched interest points. Although the proposed SURF-based face recognition method requires more computation time than a PCA-based method, it achieves a higher recognition rate. These experimental results confirm that interest point extraction algorithms can also be applied to face recognition.
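
Below is a minimal sketch of the basic SURF extract-and-match step only; the rotation and interest point verification stages described in the abstract are not reproduced. SURF ships only in the opencv-contrib "nonfree" build, so cv2.xfeatures2d may be unavailable in a default OpenCV installation, and the threshold values are illustrative.

```python
import cv2

def surf_match_score(img1, img2, hessian=400, ratio=0.75):
    """Count ratio-test-filtered SURF matches between two grayscale faces.

    Requires an OpenCV build with the nonfree xfeatures2d module.
    """
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=hessian)
    kp1, des1 = surf.detectAndCompute(img1, None)
    kp2, des2 = surf.detectAndCompute(img2, None)
    if des1 is None or des2 is None:
        return 0
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des1, des2, k=2)
    # Keep only matches that clearly beat their second-best alternative.
    good = [m for m, n in (p for p in matches if len(p) == 2)
            if m.distance < ratio * n.distance]
    return len(good)
```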

Eye and Mouth Images Based Facial Expressions Recognition Using PCA and Template Matching (PCA와 템플릿 정합을 사용한 눈 및 입 영상 기반 얼굴 표정 인식)

  • Woo, Hyo-Jeong;Lee, Seul-Gi;Kim, Dong-Woo;Ryu, Sung-Pil;Ahn, Jae-Hyeong
    • The Journal of the Korea Contents Association
    • /
    • v.14 no.11
    • /
    • pp.7-15
    • /
    • 2014
  • This paper proposes a facial expression recognition algorithm using PCA and template matching. First, the face region is acquired from an input image using a Haar-like feature mask. The face image is divided into two parts: an upper image containing the eyes and eyebrows, and a lower image containing the mouth and jaw. Extraction of the facial components begins by obtaining the eye and mouth images. An eigenface is produced by PCA training on learning images, and an eigeneye and an eigenmouth are derived from it. The eye image is obtained by template matching the upper image against the eigeneye, and the mouth image by template matching the lower image against the eigenmouth. Expression recognition then uses geometric properties of the extracted eye and mouth. Simulation results show that the proposed method achieves a higher extraction ratio than previous methods; in particular, the extraction ratio for the mouth image reaches 99%. The facial expression recognition system using the proposed method attains a recognition ratio greater than 80% for three expressions: fright, anger, and happiness.
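
The sketch below illustrates only the detection and template-matching steps of the pipeline described above, assuming an eigeneye template has already been produced by PCA training (the eigenface computation itself is not reproduced). The cascade file is OpenCV's bundled frontal-face model; the other names and parameters are illustrative.

```python
import cv2

def locate_by_template(region, eigen_template):
    """Find the best match of an eigen-template inside a face sub-region.

    `region`         : grayscale upper (eye) or lower (mouth) face image.
    `eigen_template` : grayscale template, e.g. an eigeneye rescaled to 0..255.
    Returns (x, y, w, h) of the best-matching window within `region`.
    """
    res = cv2.matchTemplate(region, eigen_template, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(res)
    th, tw = eigen_template.shape[:2]
    return (*max_loc, tw, th)

# Detect the face with OpenCV's bundled Haar cascade, split it, and search
# the upper half for the eye template.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def extract_eye_box(gray_image, eigeneye):
    faces = face_cascade.detectMultiScale(gray_image, 1.1, 5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    upper = gray_image[y:y + h // 2, x:x + w]      # eyes / eyebrows region
    ex, ey, ew, eh = locate_by_template(upper, eigeneye)
    return (x + ex, y + ey, ew, eh)                # box in image coordinates
```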

Automatic Co-registration of Cloud-covered High-resolution Multi-temporal Imagery (구름이 포함된 고해상도 다시기 위성영상의 자동 상호등록)

  • Han, You Kyung;Kim, Yong Il;Lee, Won Hee
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.21 no.4
    • /
    • pp.101-107
    • /
    • 2013
  • Commercial high-resolution satellite images generally come with geographic coordinates, but their locations differ locally depending on the sensor pose at acquisition time and the relief displacement of the terrain. Image co-registration therefore has to be applied before multi-temporal images can be used together. Co-registration is hindered, however, when images contain cloud-covered regions, because matching points are difficult to extract and many false matches occur. This paper proposes an automatic co-registration method for cloud-covered high-resolution images. The scale-invariant feature transform (SIFT), a representative feature-based matching method, is used, and only features of the target (cloud-covered) image that lie within a circular buffer around each feature of the reference image are considered as matching candidates. The proposed algorithm was applied to study sites composed of multi-temporal KOMPSAT-2 images containing cloud-covered regions. The results show that the proposed method achieves a higher correct-match rate than the original SIFT method and acceptable registration accuracy at all sites.
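
A minimal sketch of the spatially constrained matching idea, restricting each reference keypoint's candidate matches to target keypoints inside a circular buffer around its location, is given below using OpenCV's SIFT. The buffer radius and ratio-test threshold are placeholders, not the paper's values.

```python
import cv2
import numpy as np

def buffered_sift_matches(ref_img, tgt_img, radius=100.0):
    """Match SIFT features, considering for each reference keypoint only
    target keypoints lying within `radius` pixels of its location."""
    sift = cv2.SIFT_create()
    kp_r, des_r = sift.detectAndCompute(ref_img, None)
    kp_t, des_t = sift.detectAndCompute(tgt_img, None)
    if des_r is None or des_t is None:
        return []
    pts_t = np.float32([k.pt for k in kp_t])
    matches = []
    for i, (kp, d) in enumerate(zip(kp_r, des_r)):
        # Circular buffer: candidate target features near the same location
        # (the images are assumed to be roughly geo-referenced already).
        dist2 = np.sum((pts_t - kp.pt) ** 2, axis=1)
        cand = np.where(dist2 <= radius ** 2)[0]
        if cand.size < 2:
            continue
        dd = np.linalg.norm(des_t[cand] - d, axis=1)
        order = np.argsort(dd)
        best, second = dd[order[0]], dd[order[1]]
        if best < 0.8 * second:                    # Lowe-style ratio test
            matches.append((i, int(cand[order[0]])))
    return matches
```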

Extraction of Iris Codes for Personal Identification Using an Iris Image (홍채를 이용한 생체인식 코드 추출)

  • Yang, Woo Suk
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.8 no.6
    • /
    • pp.1-7
    • /
    • 2008
  • In this paper, we introduce a new technique, based on scale-space filtering, for extracting unique features from an iris image. The resulting iris code can be used to build a rapid, automatic human identification system with high reliability and confidence. First, the iris region is separated from the whole image, and the radius and center of the iris are estimated. Next, regions with a high possibility of being noise are identified, and the features present in the highly detailed iris pattern are extracted. Scale-space filtering is applied in order to preserve the original signal while minimizing the effect of noise. Experiments were performed on a set of 272 iris images taken from 18 persons. The results show that the iris feature patterns of different persons are clearly discriminated from those of the same person. (A generic sketch of deriving and comparing binary iris codes follows this entry.)

  • PDF
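
The sketch below shows only a generic way to turn a scale-space-filtered 1-D iris profile into a binary code and to compare two codes with a normalized Hamming distance; the paper's iris localization and noise-region handling are not reproduced, and the filter scales are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def iris_code(signal, scales=(1.0, 2.0, 4.0, 8.0)):
    """Binary code from a 1-D iris intensity profile: at each Gaussian
    scale, keep the sign of the smoothed signal's first difference."""
    code = []
    for s in scales:
        smooth = gaussian_filter1d(np.asarray(signal, float), sigma=s)
        code.append(np.diff(smooth) > 0)
    return np.concatenate(code)

def hamming_distance(c1, c2):
    """Fraction of disagreeing bits between two codes of equal length."""
    return np.count_nonzero(c1 != c2) / c1.size
```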

Gesture Recognition using MHI Shape Information (MHI의 형태 정보를 이용한 동작 인식)

  • Kim, Sang-Kyoon
    • Journal of the Korea Society of Computer and Information
    • /
    • v.16 no.4
    • /
    • pp.1-13
    • /
    • 2011
  • In this paper, we propose a gesture recognition system that recognizes motions using the shape information of the MHI (Motion History Image). The system computes an MHI, which encodes motion information, from the input image sequence and extracts gradient images from the MHI along the X and Y directions. It then extracts shape information by applying shape context to each gradient image and uses the resulting pattern values as features. Motions are recognized by training and classifying these features with an SVM (Support Vector Machine) classifier. By using the shape information of the MHI, the proposed system can recognize the motions of multiple people as well as the direction of movement. In addition, it achieves a high recognition rate with a simple feature extraction method.
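
Below is a minimal sketch of building a motion history image from a grayscale frame sequence and taking its X and Y gradients; the shape context descriptor and the SVM training stage are not shown, and the decay and threshold parameters are illustrative.

```python
import numpy as np

def motion_history_image(frames, tau=30, thresh=25):
    """MHI over a list of grayscale frames (uint8 arrays of equal size).

    Pixels that move in the current frame are set to `tau`; pixels that do
    not move decay by 1, so recent motion appears brighter than old motion.
    """
    mhi = np.zeros(frames[0].shape, dtype=np.float32)
    prev = frames[0].astype(np.int16)
    for f in frames[1:]:
        cur = f.astype(np.int16)
        moving = np.abs(cur - prev) > thresh
        mhi = np.where(moving, float(tau), np.maximum(mhi - 1.0, 0.0))
        prev = cur
    return mhi

def mhi_gradients(mhi):
    """X and Y gradient images of the MHI (inputs to the shape descriptor)."""
    gy, gx = np.gradient(mhi)
    return gx, gy
```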

Fuzzy Model-Based Emotion Recognition Using Color Image (퍼지 모델을 기반으로 한 컬러 영상에서의 감성 인식)

  • Joo, Young-Hoon;Jeong, Keun-Ho
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.14 no.3
    • /
    • pp.330-335
    • /
    • 2004
  • In this paper, we propose a technique for recognizing human emotion from a color image. First, the skin color region is extracted from the color image using the HSI model. Second, the face region is extracted using the Eigenface technique. Third, the facial feature points (eyebrows, eyes, nose, mouth) are located in the face image, and a fuzzy model for recognizing human emotions (surprise, anger, happiness, sadness) is built from the structural correlation of the feature points. The human emotion is then inferred from the fuzzy model. Finally, the effectiveness of the proposed method is demonstrated experimentally.
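
The sketch below covers only the first step of the pipeline, skin-color segmentation. OpenCV's HSV space is used as a stand-in for the HSI model mentioned in the abstract, and the threshold values are illustrative rather than the paper's.

```python
import cv2
import numpy as np

def skin_mask(bgr_image):
    """Rough skin-color segmentation in HSV (stand-in for the HSI model)."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    lower = np.array([0, 40, 60], dtype=np.uint8)     # illustrative bounds
    upper = np.array([25, 180, 255], dtype=np.uint8)
    mask = cv2.inRange(hsv, lower, upper)
    # Clean up small speckles before looking for the face region.
    kernel = np.ones((5, 5), np.uint8)
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
```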

Fuzzy Classifier and Bispectrum for Invariant 2-D Shape Recognition (2차원 불변 영상 인식을 위한 퍼지 분류기와 바이스펙트럼)

  • 한수환;우영운
    • Journal of Korea Multimedia Society
    • /
    • v.3 no.3
    • /
    • pp.241-252
    • /
    • 2000
  • In this paper, a translation-, rotation-, and scale-invariant system for the recognition of closed 2-D shapes, using the bispectrum of a contour sequence and a weighted fuzzy classifier, is derived and compared with a recognition process based on a competitive neural algorithm, LVQ (Learning Vector Quantization). The bispectrum, which is based on third-order cumulants, is applied to the contour sequence of an image to extract fifteen feature vectors for each planar image. These bispectral feature vectors, invariant to translation, rotation, and scale, represent the two-dimensional planar images and are fed into the weighted fuzzy classifier. Experiments with eight different shapes of aircraft images illustrate the relatively high performance of the proposed recognition system. (A sketch of an FFT-based bispectrum estimate follows this entry.)

  • PDF
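
As a minimal sketch of the first processing stage, the function below computes a direct FFT-based bispectrum estimate of a 1-D contour sequence, B(f1, f2) = X(f1) X(f2) X*(f1 + f2); the reduction to the fifteen invariant features and the weighted fuzzy classifier are not reproduced.

```python
import numpy as np

def bispectrum(x):
    """Direct FFT-based bispectrum estimate of a real 1-D sequence.

    B(f1, f2) = X(f1) * X(f2) * conj(X(f1 + f2)), with frequency indices
    taken modulo the sequence length.
    """
    x = np.asarray(x, dtype=float)
    x = x - x.mean()                       # remove the mean before the FFT
    X = np.fft.fft(x)
    n = len(X)
    idx = np.arange(n)
    f1, f2 = np.meshgrid(idx, idx, indexing="ij")
    return X[f1] * X[f2] * np.conj(X[(f1 + f2) % n])

# Example: the contour sequence would typically be the centroid distance of
# the shape's boundary points; here a synthetic sequence stands in for it.
if __name__ == "__main__":
    contour = np.cos(np.linspace(0, 4 * np.pi, 128)) + 1.5
    B = bispectrum(contour)
    print(B.shape)          # (128, 128) complex matrix
```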