• Title/Summary/Keyword: Image Feature Vector

Image Retrieval Using Spatial Color Correlation and Texture Characteristics Based on Local Fourier Transform (색상의 공간적인 상관관계와 국부적인 푸리에 변환에 기반한 질감 특성을 이용한 영상 검색)

  • Park, Ki-Tae;Moon, Young-Shik
    • Journal of the Institute of Electronics Engineers of Korea SP / v.44 no.1 / pp.10-16 / 2007
  • In this paper, we propose a technique for retrieving images using spatial color correlation and texture characteristics based on the local Fourier transform. Two new descriptors are proposed. One is a color descriptor that represents spatial color correlation; the other combines the proposed color descriptor with a texture descriptor. Most existing color descriptors that represent spatial color correlation, including the color correlogram, consider only the color distribution between neighboring pixels, so the structural information of the neighborhood is ignored. We therefore propose a novel color descriptor that represents spatial color distribution and structural information simultaneously. The proposed descriptor captures the color distribution of min-max color pairs, obtained by calculating the color distance between the center pixel and its neighboring pixels within each 3x3 block. The structural information, given by the directional difference between the minimum and maximum colors, is considered at the same time. The resulting descriptor, the min-max color correlation descriptor (MMCCD), contains the mean and variance of each directional difference. While the proposed color descriptor yields a far smaller feature vector than the color correlogram, it improves the precision rate by 2.5% to 13.21% compared with the color correlogram. In addition, we propose another descriptor that combines the proposed color descriptor with texture characteristics based on the local Fourier transform. The combined method reduces the size of the feature vector and also shows improved results over existing methods.
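
The abstract does not give the full construction, but a minimal numpy sketch of one plausible reading of the min-max pairing is shown below. The quantization into 8 bins, the absolute grey-level distance, and the use of the neighbour-index difference as the "directional difference" are illustrative assumptions, not the paper's definitions.

```python
import numpy as np

def mmccd(gray, n_bins=8):
    # Joint histogram of quantized (min-neighbour, max-neighbour) pairs plus
    # mean/variance of the directional difference between those neighbours.
    h, w = gray.shape
    q = np.clip((gray.astype(float) / 256.0 * n_bins).astype(int), 0, n_bins - 1)
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    hist = np.zeros((n_bins, n_bins))
    dir_diffs = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            c = float(gray[y, x])
            d = [abs(float(gray[y + dy, x + dx]) - c) for dy, dx in offs]
            i_min, i_max = int(np.argmin(d)), int(np.argmax(d))
            hist[q[y + offs[i_min][0], x + offs[i_min][1]],
                 q[y + offs[i_max][0], x + offs[i_max][1]]] += 1
            dir_diffs.append(abs(i_max - i_min))          # structural (directional) cue
    hist /= max(hist.sum(), 1.0)
    dir_diffs = np.asarray(dir_diffs, dtype=float)
    return np.concatenate([hist.ravel(), [dir_diffs.mean(), dir_diffs.var()]])

# Example on a random grey image: 8*8 histogram bins + 2 statistics = 66 values
print(mmccd(np.random.randint(0, 256, (64, 64), dtype=np.uint8)).shape)
```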

Robust Face Recognition based on 2D PCA Face Distinctive Identity Feature Subspace Model (2차원 PCA 얼굴 고유 식별 특성 부분공간 모델 기반 강인한 얼굴 인식)

  • Seol, Tae-In;Chung, Sun-Tae;Kim, Sang-Hoon;Chung, Un-Dong;Cho, Seong-Won
    • Journal of the Institute of Electronics Engineers of Korea SP / v.47 no.1 / pp.35-43 / 2010
  • 1D PCA, as used in appearance-based face recognition methods such as the eigenface method, can lead to weaker face representative power and higher computational cost because the resulting 1D face appearance vectors are of high dimensionality. To resolve these problems of 1D PCA, 2D PCA-based face recognition methods have been developed. However, the face representation model obtained by directly applying 2D PCA to a face image set includes both face common features and face distinctive identity features. Face common features not only degrade face recognizability but also increase computational cost. In this paper, we first develop a model of a face distinctive identity feature subspace, separated from the effects of face common features in the feature space obtained by 2D PCA. We then propose a robust face recognition method based on this face distinctive identity feature subspace model. Because it depends only on the face distinctive identity features, the proposed method shows better performance than the conventional 1D and 2D PCA-based methods with respect to recognition rate and processing time. This is verified through various experiments using the Yale A and IMM face databases, which consist of face images with various poses under various illumination conditions.
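
As a rough illustration of the 2D PCA step, the sketch below projects face images onto the leading eigenvectors of the image covariance matrix. The `n_drop` parameter is a hypothetical stand-in for separating out face common features; the paper's actual subspace-separation procedure is not reproduced here.

```python
import numpy as np

def twod_pca(images, n_keep=8, n_drop=0):
    # 2D PCA: eigenvectors of the image covariance matrix G = E[(A - mean)^T (A - mean)].
    A = np.asarray(images, dtype=float)            # shape (N, h, w)
    mean = A.mean(axis=0)
    G = np.zeros((A.shape[2], A.shape[2]))
    for a in A:
        d = a - mean
        G += d.T @ d
    G /= len(A)
    vals, vecs = np.linalg.eigh(G)                 # eigenvalues in ascending order
    order = np.argsort(vals)[::-1]
    # n_drop is a hypothetical way to skip leading "common" directions.
    X = vecs[:, order[n_drop:n_drop + n_keep]]     # projection axes, shape (w, n_keep)
    feats = np.stack([a @ X for a in A])           # feature matrices, shape (N, h, n_keep)
    return feats, X

feats, axes = twod_pca(np.random.rand(20, 32, 32), n_keep=5, n_drop=2)
print(feats.shape)                                  # (20, 32, 5)
```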

Object Detection and Tracking using Bayesian Classifier in Surveillance (서베일런스에서 베이지안 분류기를 이용한 객체 검출 및 추적)

  • Kang, Sung-Kwan;Choi, Kyong-Ho;Chung, Kyung-Yong;Lee, Jung-Hyun
    • Journal of Digital Convergence / v.10 no.6 / pp.297-302 / 2012
  • In this paper, we present an object detection and tracking method based on image context analysis that is robust to image variations such as complicated backgrounds and dynamic movement of the object. Image context analysis is carried out using a hybrid network of k-means and RBF. The proposed object detection employs a context-driven adaptive Bayesian framework to relieve the effect of uneven object images. The method uses a feature vector generator based on the 2D Haar wavelet transform together with a Bayesian discriminant in order to speed up learning. The system took less time to learn, and learning on a wide variety of data produced consistent results. The proposed method was then applied to a real-world environment and remained stable even when the detected object passed outside the expected area or underwent other uncertain changes. The experimental results show that the proposed approach achieves superior performance over previous methods on various data sets.
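
A hedged sketch of the feature-generation and classification idea follows: a one-level 2D Haar wavelet transform produces the feature vector, and a Gaussian naive Bayes classifier stands in for the Bayesian discriminant. The hybrid k-means/RBF context network and the adaptive framework are not modeled; the images and labels are toy data.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

def haar_features(img):
    # One-level 2D Haar transform: approximation + horizontal/vertical/diagonal details.
    a = np.asarray(img, dtype=float)
    a = a[:a.shape[0] // 2 * 2, :a.shape[1] // 2 * 2]   # crop to even dimensions
    tl, tr = a[0::2, 0::2], a[0::2, 1::2]
    bl, br = a[1::2, 0::2], a[1::2, 1::2]
    ll = (tl + tr + bl + br) / 4
    lh = (tl - tr + bl - br) / 4
    hl = (tl + tr - bl - br) / 4
    hh = (tl - tr - bl + br) / 4
    return np.concatenate([b.ravel() for b in (ll, lh, hl, hh)])

# Toy data: 40 random 24x24 patches, half labelled object (1), half background (0)
X = np.stack([haar_features(np.random.rand(24, 24)) for _ in range(40)])
y = np.repeat([0, 1], 20)
clf = GaussianNB().fit(X, y)        # Bayesian discriminant stand-in
print(clf.predict(X[:3]))
```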

Digital Mapping Based on Digital Ortho Images (수치정사투영영상을 이용한 수치지도제작)

  • 이재기;박경식
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.18 no.1 / pp.1-9 / 2000
  • Recently, the necessity and effective usage of digital ortho-images have increased rapidly, and they are applied in many fields beyond ortho-photo maps. In this study, we extract individual objects from aerial images and automatically classify graphic information to produce a digital map using only digital ortho-images, without special drawing devices. For this purpose, we applied various image processing techniques and fuzzy theory, classified the outlines and lanes of roads and buildings, and assigned each feature to its own layer. For buildings in particular, the outer vector lines extracted pixel by pixel were very complex, so we developed a program that expresses them as one-dimensional line segments between building corners. Although we could not extract and recognize every object in the image at once, we achieved errors within 50 cm using a semi-automatic technique. Therefore, this method can be used effectively in producing 1/5,000 digital maps.

Fake Face Detection System Using Pupil Reflection (동공의 반사특징을 이용한 얼굴위조판별 시스템)

  • Yang, Jae-Jun;Cho, Seong-Won;Chung, Sun-Tae
    • Journal of the Korean Institute of Intelligent Systems / v.20 no.5 / pp.645-651 / 2010
  • Recently, the need for advanced security technologies has been increasing as intelligent crime grows rapidly. Previous liveness detection methods require improved accuracy before they can be put to practical use. In this paper, we propose a new fake face detection method using pupil reflection. The proposed system detects eyes based on a multi-scale Gabor feature vector in the first stage, and uses template matching in the second stage to increase the detection accuracy; the template matching determines the allowed eye area. The infrared image reflected in the pupil is then used to decide whether the captured image is fake. Experimental results indicate that the proposed method is superior to previous methods in the detection accuracy of fake images.
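
The following sketch illustrates only the first-stage feature computation: a small bank of Gabor filters at a few scales and orientations, summarized into a feature vector for an eye patch. The kernel sizes, scales, and response statistics are assumptions; the template matching and infrared reflection analysis are not shown.

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(ksize, sigma, theta, lambd, gamma=0.5):
    # Real part of a Gabor kernel (Gaussian envelope times a cosine carrier).
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2)) * np.cos(2 * np.pi * xr / lambd)

def gabor_feature(patch, scales=(4, 8), orientations=4):
    # Mean absolute response and standard deviation per (scale, orientation) filter.
    feats = []
    for s in scales:
        for k in range(orientations):
            g = gabor_kernel(2 * s + 1, s / 2.0, k * np.pi / orientations, float(s))
            resp = convolve2d(patch, g, mode='same')
            feats.extend([np.abs(resp).mean(), resp.std()])
    return np.array(feats)

# Feature vector for a toy 32x32 eye patch: 2 scales x 4 orientations x 2 stats = 16 values
print(gabor_feature(np.random.rand(32, 32)).shape)
```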

Recognition of Occluded Face (가려진 얼굴의 인식)

  • Kang, Hyunchul
    • Journal of the Korea Institute of Information and Communication Engineering / v.23 no.6 / pp.682-689 / 2019
  • In part-based image representation, the partial shapes of an object are represented as basis vectors, and an image is decomposed as a linear combination of these basis vectors, where the coefficients represent the partial (or local) features of the object. In this paper, a face recognition method for occluded faces is proposed in which face images are represented using non-negative matrix factorization (NMF), one of the part-based representation techniques, and recognized using an artificial neural network. Standard NMF, projected gradient NMF, and orthogonal NMF were used for the part-based representation of face images, and their performances were compared. A learning vector quantizer was used as the recognizer, with Euclidean distance as the distance measure. Experimental results show that the proposed method is more robust to occluded faces than conventional face recognition.
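
A minimal sketch of the NMF-based representation, using scikit-learn's standard NMF (the projected gradient and orthogonal variants compared in the paper are not reproduced), is given below. The nearest-prototype matching is a simplified stand-in for the learning vector quantizer, and all data are random placeholders.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
faces = rng.random((50, 32 * 32))                  # stand-in for vectorised training faces
model = NMF(n_components=25, init='nndsvda', max_iter=500)
coeffs = model.fit_transform(faces)                # part coefficients, shape (50, 25)
bases = model.components_                          # part-like basis vectors, shape (25, 1024)

# A new (possibly occluded) face is encoded with the same bases and matched to
# the nearest class prototype by Euclidean distance. A trained LVQ would supply
# learned codebook vectors; here the first five coefficient vectors stand in.
query = model.transform(rng.random((1, 32 * 32)))
prototypes = coeffs[:5]
print(int(np.argmin(np.linalg.norm(prototypes - query, axis=1))))
```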

Deep Learning Similarity-based 1:1 Matching Method for Real Product Image and Drawing Image

  • Han, Gi-Tae
    • Journal of the Korea Society of Computer and Information / v.27 no.12 / pp.59-68 / 2022
  • This paper presents a method for 1:1 verification by comparing the similarity between a given real product image and a drawing image. The proposed method combines two existing CNN-based deep learning models to construct a Siamese network. The feature vector of each image is extracted through the fully connected (FC) layer of its network and the similarity is compared; if the real product image and the drawing image (front view, left and right side views, top view, etc.) show the same product, the similarity is set to 1 for learning, and if they show different products, it is set to 0. The test (inference) model is a deep learning model that takes a real product image and a drawing image as a pair and determines whether they show the same product. In the proposed model, if the similarity between the real product image and the drawing image is greater than or equal to a threshold (0.5), the pair is judged to be the same product; otherwise, it is judged to be a different product. The proposed model showed an accuracy of about 71.8% for queries where the drawing matches the real product (positive:positive) and about 83.1% for queries on a different product (positive:negative). In future work, we plan to improve the matching accuracy between real product images and drawing images by combining parameter optimization with the proposed model and adding processes such as data cleansing.
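
The verification idea can be sketched as a small Siamese network: two shared-weight branches embed the photo and the drawing, and a similarity head maps the pair to [0, 1], thresholded at 0.5 as described above. The backbone below is a toy CNN, not the paper's pretrained models.

```python
import torch
import torch.nn as nn

class SiameseVerifier(nn.Module):
    def __init__(self, embed_dim=128):
        super().__init__()
        # Shared backbone embeds both the product photo and the drawing.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, embed_dim))
        self.head = nn.Linear(embed_dim, 1)   # similarity score from the embedding gap

    def forward(self, photo, drawing):
        a, b = self.backbone(photo), self.backbone(drawing)
        return torch.sigmoid(self.head(torch.abs(a - b))).squeeze(1)

model = SiameseVerifier()
sim = model(torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64))
print((sim >= 0.5).tolist())   # True means "judged the same product"
```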

Image Based Damage Detection Method for Composite Panel With Guided Elastic Wave Technique Part I. Damage Localization Algorithm (복합재 패널에서 유도 탄성파를 이용한 이미지 기반 손상탐지 기법 개발 Part I. 손상위치 탐지 알고리즘)

  • Kim, Changsik;Jeon, Yongun;Park, Jungsun;Cho, Jin Yeon
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.49 no.1 / pp.1-12 / 2021
  • In this paper, a new algorithm is proposed to estimate the damage location in a composite panel by extracting the elastic wave signal reflected from the damaged area. The guided elastic wave is generated by a piezoelectric actuator and sensed by a piezoelectric sensor. The proposed algorithm takes a diagnostic approach: it compares the undamaged signal with the damaged signal and extracts damage information using the sensor network and the Lamb wave group velocity estimated by signal correlation. However, it is difficult to clearly distinguish the damage location because of the nonlinear properties of Lamb waves and the complex mixture of signals. To overcome this difficulty, a cumulative summation feature vector (CSFV) algorithm and a visualization technique are newly proposed in this paper. The CSFV algorithm finds the center of the damage by converting the signals reflected from the damage into the distances the signals travel, and the visualization technique expresses the feature vectors by multiplying damage indexes. Experiments were performed on a composite panel, and a comparative study with existing algorithms was carried out. The results confirm that the proposed algorithm detects the damage location with more reliable accuracy.
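
As a hedged illustration of the localization principle only, the sketch below accumulates, for each actuator-sensor pair, the residual-signal envelope at the time a reflection from each grid point would arrive, given an assumed group velocity. This delay-and-sum style map is a simplification; the CSFV construction and the damage-index visualization in the paper are more elaborate.

```python
import numpy as np

def damage_map(signals, pairs, grid, velocity, fs):
    # Accumulate, per actuator-sensor pair, the residual-signal envelope at the
    # arrival time of a hypothetical reflection from each grid point.
    image = np.zeros(len(grid))
    for (act, sen), sig in zip(pairs, signals):
        env = np.abs(sig)                               # crude envelope
        for i, pt in enumerate(grid):
            dist = np.linalg.norm(pt - act) + np.linalg.norm(pt - sen)
            idx = int(dist / velocity * fs)             # sample index of expected arrival
            if idx < len(env):
                image[i] += env[idx]
    return image                                        # its argmax approximates the damage location

# Toy setup: one actuator-sensor pair on a 0.5 m x 0.5 m panel, random residual signal
grid = np.array([[x, y] for x in np.linspace(0, 0.5, 20) for y in np.linspace(0, 0.5, 20)])
pairs = [(np.array([0.0, 0.0]), np.array([0.5, 0.0]))]
signals = [np.random.rand(2000)]
m = damage_map(signals, pairs, grid, velocity=1500.0, fs=1e6)
print(grid[np.argmax(m)])
```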

The Method of Wet Road Surface Condition Detection With Image Processing at Night (영상처리기반 야간 젖은 노면 판별을 위한 방법론)

  • KIM, Youngmin;BAIK, Namcheol
    • Journal of Korean Society of Transportation / v.33 no.3 / pp.284-293 / 2015
  • The objective of this paper is to determine road surface conditions using images collected from closed-circuit television (CCTV) cameras installed on the roadside. First, techniques for detecting wet surfaces at nighttime were examined. The literature review revealed that image processing using polarization is one of the preferred options; however, the polarization characteristics of road surface images are difficult to use at night because of irregular or absent lighting. In this study, we propose a new discriminant for detecting wet and dry road surfaces using nighttime CCTV image data. To detect road surface conditions under night vision, we applied the wavelet packet transform to analyze road surface textures. In addition, to exploit the luminance of night CCTV images, we computed an intensity histogram based on the HSI (Hue Saturation Intensity) color model. With a set of 200 images taken in the field, we constructed a detection hyperplane with an SVM (Support Vector Machine). Field tests to verify the detection of wet road surfaces produced reliable results. The outcome of this study is also expected to be used for monitoring road surfaces to improve safety.
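
A rough sketch of the feature vector and classifier is shown below: wavelet-packet sub-band energies serve as texture cues and a coarse intensity histogram approximates the I channel of the HSI model, with an RBF-kernel SVM as the discriminant. The decomposition level, wavelet, histogram bins, and the random stand-in images are assumptions.

```python
import numpy as np
import pywt
from sklearn.svm import SVC

def road_features(gray):
    # Wavelet-packet sub-band energies (texture) + 16-bin intensity histogram (luminance).
    wp = pywt.WaveletPacket2D(data=gray.astype(float), wavelet='db1', maxlevel=2)
    energies = [np.mean(node.data ** 2) for node in wp.get_level(2)]
    hist, _ = np.histogram(gray, bins=16, range=(0, 255), density=True)
    return np.concatenate([energies, hist])

# Toy training set standing in for the 200 field images: 0 = dry, 1 = wet
rng = np.random.default_rng(1)
X = np.stack([road_features(rng.integers(0, 256, (64, 64))) for _ in range(40)])
y = np.repeat([0, 1], 20)
clf = SVC(kernel='rbf').fit(X, y)
print(clf.predict(X[:3]))
```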

Improved SIM Algorithm for Contents-based Image Retrieval (내용 기반 이미지 검색을 위한 개선된 SIM 방법)

  • Kim, Kwang-Baek
    • Journal of Intelligence and Information Systems / v.15 no.2 / pp.49-59 / 2009
  • Content-based image retrieval methods are in general more objective and effective than text-based methods, since they search using color and texture and avoid the need to annotate every image. SIM (Self-organizing Image browsing Map) is a content-based image retrieval algorithm that uses only the browsable mapping results obtained by a SOM (Self-Organizing Map). However, the SOM may select the wrong BMU in the learning phase if there are similar nodes whose color information is distorted by lighting or by object movements in the image. Such images may be mapped to other group nodes, which lowers the retrieval rate. In this paper, we propose an improved SIM that uses the HSV color model and color quantization to extract image features. To avoid the learning error mentioned above, our SOM consists of two layers. In the learning phase, SOM layer 1 takes the color feature vectors as input. After layer 1 is trained, its connection weights become the input of SOM layer 2, which is trained in turn. With this multi-layered SOM learning, mapping errors among similar nodes with different color information can be avoided. For retrieval, the query image vector is fed into SOM layer 2, and the nodes of SOM layer 1 connected to the chosen BMU of layer 2 are selected. Experiments verified that the proposed SIM performs better than the original SIM and effectively avoids mapping errors.
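
The two-layer idea can be sketched with a tiny SOM: layer 1 is trained on color feature vectors, layer 2 is re-trained on the layer-1 weights, and a query is routed through layer 2 before selecting nearby layer-1 nodes. The winner-take-all update below omits the neighborhood function of a full SOM, and the HSV histograms are random placeholders.

```python
import numpy as np

def train_som(data, n_nodes=16, epochs=30, lr=0.5):
    # Simplified SOM: winner-take-all weight update with a decaying learning
    # rate (no neighborhood function), enough to illustrate the layering.
    rng = np.random.default_rng(0)
    w = rng.random((n_nodes, data.shape[1]))
    for t in range(epochs):
        a = lr * (1 - t / epochs)
        for x in data:
            bmu = np.argmin(np.linalg.norm(w - x, axis=1))
            w[bmu] += a * (x - w[bmu])
    return w

feats = np.random.rand(100, 48)          # stand-in HSV color histograms
w1 = train_som(feats, n_nodes=25)        # layer 1: trained on image features
w2 = train_som(w1, n_nodes=9)            # layer 2: trained on layer-1 weights
query = feats[0]
bmu2 = np.argmin(np.linalg.norm(w2 - query, axis=1))
# Layer-1 nodes closest to the chosen layer-2 node form the retrieval group.
group = np.argsort(np.linalg.norm(w1 - w2[bmu2], axis=1))[:5]
print(group)
```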
