• Title/Summary/Keyword: Texture recognition


Morphable Model to Interpolate Difference between Number of Pixels and Number of Vertices (픽셀 수와 정점들 간의 차이를 보완하는 Morphable 모델)

  • Ko, Bang-Hyun;Moon, Hyeon-Joon;Kim, Yong-Guk;Moon, Seung-Bin;Lee, Jong-Weon
    • The Journal of the Korea Contents Association / v.7 no.3 / pp.1-8 / 2007
  • Images acquired from various systems such as CCTV and robots include many human faces. Because of the rapid increase in visual data, these images cannot be processed manually; they must be processed automatically. Furthermore, companies require automatic security systems to protect their new technology. Various options are available, including face recognition, iris recognition, and fingerprint recognition. Face recognition is preferable since it does not require direct contact. However, the standard 2-dimensional method is limited, so morphable models may be recommended as an alternative. The original morphable model, made by MPI, contains a large quantity of data, such as texture and geometry data. This paper presents a Geometrix-based morphable model designed to reduce this data volume.

Content-based image retrieval using a fusion of global and local features

  • Hee Hyung Bu;Nam Chul Kim;Sung Ho Kim
    • ETRI Journal / v.45 no.3 / pp.505-517 / 2023
  • Color, texture, and shape serve as important information in human recognition of images. For content-based image retrieval, many studies have combined color, texture, and shape features to improve retrieval performance. However, there have not been many powerful methods for combining all color, texture, and shape features. This study proposes a content-based image retrieval method that uses the combined local and global features of color, texture, and shape. The color features are extracted from the color autocorrelogram; the texture features are extracted from the magnitude of a complete local binary pattern and the Gabor local correlation, which reveal local image characteristics; and the shape features are extracted from singular value decomposition, which reflects global image characteristics. In this work, an experiment is performed to compare the proposed method with those that use subsets of our features and some existing techniques. The results show an average precision that is 19.60% higher than those of existing methods and 9.09% higher than those of recent ones. In conclusion, our proposed method is superior to other methods in terms of retrieval performance.
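The fusion idea above can be illustrated with a small sketch. The descriptors below (a plain intensity histogram, a uniform LBP histogram, and leading singular values) are simplified stand-ins for the paper's color autocorrelogram, complete-local-binary-pattern magnitude, Gabor local correlation, and SVD shape features; the function names, bin counts, and distance measure are assumptions for illustration only.

```python
# Simplified sketch of global/local feature fusion for content-based retrieval.
# Stand-in descriptors; not the paper's exact autocorrelogram/CLBP/Gabor features.
import numpy as np
from skimage.feature import local_binary_pattern

def fused_features(gray, n_singular=16):
    """Concatenate color-, texture-, and shape-like descriptors of a grayscale image."""
    # "Color": normalized intensity histogram (stand-in for the color autocorrelogram).
    color, _ = np.histogram(gray, bins=32, range=(0, 255), density=True)
    # Texture: uniform LBP histogram (stand-in for CLBP magnitude / Gabor correlation).
    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
    texture, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    # Shape: leading singular values capture global structure (image assumed >= 16x16).
    s = np.linalg.svd(gray.astype(float), compute_uv=False)[:n_singular]
    shape = s / (s.sum() + 1e-12)
    return np.concatenate([color, texture, shape])

def retrieve(query, database_images, k=5):
    """Rank database images by L1 distance between fused feature vectors."""
    q = fused_features(query)
    dists = [np.abs(q - fused_features(img)).sum() for img in database_images]
    return np.argsort(dists)[:k]
```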

Patch-based Texture Synthesis for Marker Concealment (마커 은닉을 위한 패치 기반 텍스쳐 합성)

  • Yun, Kyung-Dahm;Woo, Woon-Tack
    • Journal of the HCI Society of Korea / v.2 no.2 / pp.11-18 / 2007
  • We propose a novel method to conceal fiducial markers observed in augmented scenes using patch-based texture synthesis. Despite their efficiency for simple object recognition and tracking, the markers are inherently obtrusive: they not only reduce immersion but also severely degrade the usability of augmented reality. The proposed method constructs alternative images in real time to overlay the markers present in the image sequence. The global characteristics of the background textures are retained, and the results adapt to illumination changes.
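As a rough illustration of the synthesis idea (not the paper's real-time pipeline), the sketch below fills a rectangular marker region with the candidate background patch whose surrounding border ring matches best; the ring width, search stride, and function names are assumptions.

```python
# Minimal patch-based concealment sketch: replace a square marker region with
# the background patch whose surrounding border ring matches best (SSD).
import numpy as np

def border_ring(img, y, x, size, width=2):
    """Pixels in a thin ring around a size x size patch at (y, x), interior zeroed."""
    region = img[y - width:y + size + width, x - width:x + size + width].astype(float)
    region[width:width + size, width:width + size] = 0.0  # ignore the patch interior
    return region

def conceal_marker(img, top, left, size, stride=4, width=2):
    """Overwrite the marker at (top, left); the marker must lie `width` px inside the image."""
    h, w = img.shape
    target = border_ring(img, top, left, size, width)
    best_yx, best_cost = None, np.inf
    for y in range(width, h - size - width, stride):
        for x in range(width, w - size - width, stride):
            if abs(y - top) < size and abs(x - left) < size:
                continue  # skip candidates overlapping the marker itself
            cost = np.sum((border_ring(img, y, x, size, width) - target) ** 2)
            if cost < best_cost:
                best_yx, best_cost = (y, x), cost
    by, bx = best_yx
    out = img.copy()
    out[top:top + size, left:left + size] = img[by:by + size, bx:bx + size]
    return out
```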


Reconstruction of High-Resolution Facial Image Based on A Recursive Error Back-Projection

  • Park, Joeng-Seon;Lee, Seong-Whan
    • Proceedings of the Korean Information Science Society Conference / 2004.04b / pp.715-717 / 2004
  • This paper proposes a new method for reconstructing a high-resolution facial image from a low-resolution facial image based on recursive error back-projection and top-down machine learning. A face is represented by a linear combination of prototypes of shape and texture. From the shape and texture information about the pixels in a given low-resolution facial image, we can estimate the optimal coefficients for a linear combination of shape prototypes and texture prototypes by solving a least-squares minimization. A high-resolution facial image can then be obtained by applying the optimal coefficients to a linear combination of the high-resolution prototypes. In addition, recursive error back-projection is applied to improve the accuracy of the synthesized high-resolution facial image. The encouraging results show that the proposed method can be used to improve face recognition performance by reconstructing high-resolution facial images from low-resolution ones captured at a distance.
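A compact sketch of the two steps described above: least-squares coefficients in the low-resolution prototype basis, followed by a few error back-projection iterations. The matrices `P_lo`/`P_hi` (columns are vectorized prototypes) and the `downsample` callable are assumed inputs; this is an interpretation of the abstract, not the authors' code.

```python
# Prototype-based reconstruction with recursive error back-projection (sketch).
import numpy as np

def reconstruct_high_res(x_lo, P_lo, P_hi, downsample, n_iter=3):
    """Estimate a high-resolution face vector from a low-resolution one.

    x_lo: vectorized low-res face; P_lo/P_hi: prototype matrices (one prototype
    per column); downsample: callable mapping a high-res vector to low-res.
    """
    # Optimal coefficients of the input in the low-resolution prototype basis.
    c, *_ = np.linalg.lstsq(P_lo, x_lo, rcond=None)
    x_hi = P_hi @ c                                  # initial high-res estimate
    for _ in range(n_iter):                          # recursive error back-projection
        err_lo = x_lo - downsample(x_hi)             # residual in low-res space
        c_err, *_ = np.linalg.lstsq(P_lo, err_lo, rcond=None)
        x_hi = x_hi + P_hi @ c_err                   # correct the high-res estimate
    return x_hi
```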


Face Detection and Recognition Using Ellipsoidal Information and Wavelet Packet Analysis (타원형 정보와 웨이블렛 패킷 분석을 이용한 얼굴 검출 및 인식)

  • 정명호;김은태;박민용
    • Proceedings of the IEEK Conference / 2003.07e / pp.2327-2330 / 2003
  • This paper deals with face detection and recognition using ellipsoidal information and wavelet packet analysis; a novel method for recognizing views of human faces under roughly constant illumination is presented. We propose two methods. First, the face detection method uses the general ellipsoidal shape of the human face contour, and eye positions are found on wavelet-transformed face images. Second, the proposed face recognition scheme is based on the analysis of a wavelet packet decomposition of the face images. Each face image is first located and then described by a subset of band-filtered images containing wavelet coefficients. From these wavelet coefficients, which characterize the face texture, the Euclidean distance can be used to classify the face feature vectors into person classes. Experimental results are presented using images from the FERET and MIT FACES databases. The efficiency of the proposed approach is analyzed according to the FERET evaluation procedure and by comparing our results with those obtained using the well-known Eigenfaces method. The proposed system achieved recognition rates of 97% (MIT database) and 95.8% (FERET database).
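The recognition step can be sketched with the PyWavelets package: each face is described by simple statistics of its wavelet-packet subbands and probes are matched by Euclidean nearest neighbour. The subband statistics used here are a simplification of the paper's subset of band-filtered images, and the function names are illustrative.

```python
# Wavelet-packet face features with Euclidean nearest-neighbour matching (sketch).
import numpy as np
import pywt

def wavelet_packet_features(face, level=2, wavelet="db2"):
    """Mean and standard deviation of every wavelet-packet subband at the given level."""
    wp = pywt.WaveletPacket2D(data=face.astype(float), wavelet=wavelet, maxlevel=level)
    feats = []
    for node in wp.get_level(level, order="natural"):
        feats.extend([node.data.mean(), node.data.std()])
    return np.array(feats)

def classify(probe, gallery_features, gallery_labels):
    """Return the label of the gallery face closest in Euclidean distance."""
    d = np.linalg.norm(gallery_features - wavelet_packet_features(probe), axis=1)
    return gallery_labels[int(np.argmin(d))]
```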


Object Recognition for Mobile Robot using Context-based Bi-directional Reasoning (상황 정보 기반 양방향 추론 방법을 이용한 이동 로봇의 물체 인식)

  • Lim, G.H.;Ryu, G.G.;Suh, I.H.;Kim, J.B.;Zhang, G.X.;Kang, J.H.;Park, M.K.
    • Proceedings of the KIEE Conference / 2007.04a / pp.6-8 / 2007
  • In this paper, we propose a reasoning system for object recognition and space classification that uses not only visual features but also contextual information. A mobile robot, especially a vision-based one, must perceive objects and classify spaces in real environments. Several visual features, such as texture, SIFT, and color, are used for object recognition, but sensor uncertainty and object occlusion cause many difficulties in vision-based perception. To show the validity of our reasoning system, experimental results are presented in which objects and spaces are inferred by bi-directional rules even with partial and uncertain information, and the system combines top-down and bottom-up approaches.
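A toy sketch of bi-directional reasoning over context facts: forward chaining derives objects and spaces from available evidence, while backward chaining reports what evidence is still missing for a hypothesis. The rules and facts below are illustrative placeholders, not the paper's ontology or inference engine.

```python
# Toy bi-directional rule reasoning over context facts (illustrative rules only).
RULES = [
    ({"texture:wood", "shape:flat", "has_legs"}, "object:table"),
    ({"object:table", "object:sink"}, "space:kitchen"),
    ({"color:white", "shape:box"}, "object:refrigerator"),
]

def forward_chain(facts):
    """Bottom-up: add every conclusion whose premises are all satisfied."""
    facts, changed = set(facts), True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

def missing_evidence(goal, facts):
    """Top-down: for each rule that could establish the goal, list missing premises."""
    return [premises - set(facts) for premises, conclusion in RULES if conclusion == goal]

observed = {"texture:wood", "shape:flat", "has_legs", "object:sink"}
print(forward_chain(observed))                            # infers object:table and space:kitchen
print(missing_evidence("object:refrigerator", observed))  # evidence still to look for
```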


Iris Recognition Using Ridgelets

  • Birgale, Lenina;Kokare, Manesh
    • Journal of Information Processing Systems / v.8 no.3 / pp.445-458 / 2012
  • Image feature extraction is one of the basic tasks in biometric analysis. This paper presents the novel concept of applying ridgelets to iris recognition systems. Ridgelet transforms are the combination of Radon transforms and wavelet transforms, and they are suitable for extracting the abundant textural data present in an iris. The technique proposed here uses ridgelets to form an iris signature and to represent the iris. This paper contributes towards creating an improved iris recognition system: the feature vector size is reduced to 1×4, the False Acceptance Rate (FAR) and False Rejection Rate (FRR) are reduced, and the accuracy is increased. The proposed method also avoids the iris normalization process that is traditionally used in iris recognition systems. Experimental results indicate that the proposed method achieves an accuracy of 99.82%, an FAR of 0.1309%, and an FRR of 0.0434%.
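The pipeline can be sketched roughly as a Radon transform followed by a 1-D wavelet transform of each projection, reduced to a 1×4 signature. The statistics chosen below (means and standard deviations of the approximation and detail bands) and the matching threshold are assumptions, not the paper's exact signature definition.

```python
# Ridgelet-style iris signature sketch: Radon projections + 1-D wavelet statistics.
import numpy as np
import pywt
from skimage.transform import radon

def ridgelet_signature(iris, n_angles=180):
    """Compute a 1x4 texture signature from a grayscale iris image."""
    theta = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    sinogram = radon(iris.astype(float), theta=theta, circle=False)
    # Wavelet transform applied along each projection (columns of the sinogram).
    cA, cD = pywt.dwt(sinogram, "db4", axis=0)
    return np.array([cA.mean(), cA.std(), cD.mean(), cD.std()])

def same_iris(sig_a, sig_b, threshold=0.5):
    """Accept a pair when the signature distance falls below a tuned threshold."""
    return float(np.linalg.norm(sig_a - sig_b)) < threshold
```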

A Survey on Image Emotion Recognition

  • Zhao, Guangzhe;Yang, Hanting;Tu, Bing;Zhang, Lei
    • Journal of Information Processing Systems / v.17 no.6 / pp.1138-1156 / 2021
  • Emotional semantics are the highest level of semantics that can be extracted from an image. Constructing a system that can automatically recognize the emotional semantics of images will be significant for marketing, smart healthcare, and deep human-computer interaction. To understand the direction of image emotion recognition as well as the general research methods, we summarize the current development trends and shed light on potential future research. The primary contributions of this paper are as follows. We investigate the color, texture, shape, and contour features used for emotional semantics extraction. We establish two models that map images into emotional space and introduce in detail the various processes in the image emotional semantic recognition framework. We also discuss important datasets and useful applications in the field, such as garment images and image retrieval. We conclude with a brief discussion about future research trends.

A Vehicular License Plate Recognition Framework For Skewed Images

  • Arafat, M.Y.;Khairuddin, A.S.M.;Paramesran, R.
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.11 / pp.5522-5540 / 2018
  • Vehicular license plate (LP) recognition has recently risen as a significant field of research, as researchers explore ways to cope with the challenges posed by LPs under different illumination and angular conditions. This research focuses on restricted conditions such as images containing only one vehicle, a stationary background, and no prior angular adjustment of the skewed images. A real-time vehicular LP recognition scheme is proposed for detecting, segmenting, and recognizing LPs in skewed images. A polar coordinate transformation procedure is implemented to adjust the skewed vehicular images. A window-scanning procedure based on the texture characteristics of the image is then used for candidate localization. Next, connected component analysis (CCA) is applied to the binary image for character segmentation, where pixels are connected in an eight-point neighbourhood. Finally, optical character recognition is applied to recognize the characters. To measure performance, 300 skewed images with different illumination conditions and various tilt angles were tested. The results show that the proposed method achieves an accuracy of 96.3% in localizing, 95.4% in segmenting, and 94.2% in recognizing the LPs, with an average localization time of 0.52 s.
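The character-segmentation step (connected component analysis over an eight-point neighbourhood) can be illustrated with the sketch below; the area and aspect-ratio thresholds are illustrative placeholders, not the paper's values.

```python
# Character segmentation via 8-connected component analysis on a binary plate image.
import numpy as np
from scipy import ndimage

def segment_characters(binary_plate, min_area=30, max_area=2000):
    """Return left-to-right bounding boxes (top, left, bottom, right) of likely characters."""
    eight_connectivity = np.ones((3, 3), dtype=int)      # eight-point neighbourhood
    labels, _ = ndimage.label(binary_plate, structure=eight_connectivity)
    boxes = []
    for sl in ndimage.find_objects(labels):
        if sl is None:
            continue
        height = sl[0].stop - sl[0].start
        width = sl[1].stop - sl[1].start
        # Keep components whose size and aspect ratio resemble plate characters.
        if min_area <= height * width <= max_area and 0.2 <= width / height <= 1.2:
            boxes.append((sl[0].start, sl[1].start, sl[0].stop, sl[1].stop))
    return sorted(boxes, key=lambda b: b[1])              # order characters left to right
```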

Shape Based Framework for Recognition and Tracking of Texture-free Objects for Submerged Robots in Structured Underwater Environment (수중로봇을 위한 형태를 기반으로 하는 인공표식의 인식 및 추종 알고리즘)

  • Han, Kyung-Min;Choi, Hyun-Taek
    • Journal of the Institute of Electronics Engineers of Korea SC / v.48 no.6 / pp.91-98 / 2011
  • This paper proposes an efficient and accurate vision-based recognition and tracking framework for texture-free objects. We approach this problem with a two-phase algorithm consisting of a detection phase and a tracking phase. In the detection phase, the algorithm extracts shape context descriptors that are used to classify objects into predetermined targets of interest; the matching result is then further refined by a minimization technique. In the tracking phase, we use a mean-shift tracking algorithm based on the Bhattacharyya coefficient. In summary, the contributions of our method to underwater robot vision are fourfold: 1) it can deal with camera motion and scale changes of objects in the underwater environment; 2) it is an inexpensive vision-based recognition algorithm; 3) a shape-based method has advantages over a distinctive feature-point-based method such as SIFT in underwater environments with possible turbidity variation; and 4) we make a quantitative comparison of our method with several other well-known methods. The results are quite promising for the map-based underwater SLAM task that is the goal of our research.
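The tracking-phase similarity measure mentioned above, the Bhattacharyya coefficient between normalized histograms of the target model and a candidate region, is simple to state; the histogram binning and acceptance threshold below are assumptions for illustration.

```python
# Bhattacharyya coefficient between intensity histograms, as used to score
# candidate windows in mean-shift tracking (higher means more similar).
import numpy as np

def normalized_histogram(patch, bins=16):
    """Normalized intensity histogram of an image patch."""
    h, _ = np.histogram(patch, bins=bins, range=(0, 255))
    return h / (h.sum() + 1e-12)

def bhattacharyya_coefficient(p, q):
    """rho(p, q) = sum_i sqrt(p_i * q_i); equals 1 for identical normalized histograms."""
    return float(np.sum(np.sqrt(p * q)))

def same_target(model_patch, candidate_patch, threshold=0.8):
    """Accept the candidate window when its histogram is close enough to the model."""
    return bhattacharyya_coefficient(normalized_histogram(model_patch),
                                     normalized_histogram(candidate_patch)) > threshold
```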