• Title/Summary/Keyword: color image segmentation

A Robust Face Detection Method Based on Skin Color and Edges

  • Ghimire, Deepak; Lee, Joonwhoan
    • Journal of Information Processing Systems / v.9 no.1 / pp.141-156 / 2013
  • In this paper, we propose a method to detect human faces in color images. Many existing systems use a window-based classifier that scans the entire image for the presence of a human face, and such systems suffer from scale variation, pose variation, illumination changes, etc. Here, we propose a lighting-insensitive face detection method based upon the edge and skin tone information of the input color image. First, image enhancement is performed, especially if the image was acquired under unconstrained illumination conditions. Next, skin segmentation in YCbCr and RGB space is conducted. The result of skin segmentation is refined using the skin tone percentage index method. The edges of the input image are combined with the skin tone image to separate all non-face regions from candidate faces. Candidate verification using primitive shape features of the face is applied to decide which of the candidate regions corresponds to a face. The advantage of the proposed method is that it can detect faces of different sizes, in different poses, and with different expressions under unconstrained illumination conditions.
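
As a rough illustration of the skin tone step, the sketch below thresholds the Cr/Cb channels with commonly cited (not the paper's) bounds and suppresses skin pixels that lie on strong edges, assuming OpenCV and NumPy:

```python
import cv2
import numpy as np

def skin_mask_ycbcr(bgr_image):
    """Rough skin segmentation in YCbCr space; the thresholds are illustrative."""
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    lower = np.array([0, 133, 77], dtype=np.uint8)     # Y, Cr, Cb lower bounds
    upper = np.array([255, 173, 127], dtype=np.uint8)  # Y, Cr, Cb upper bounds
    mask = cv2.inRange(ycrcb, lower, upper)
    # Combine with edges so strong boundaries split skin blobs into face candidates.
    edges = cv2.Canny(cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY), 100, 200)
    mask[edges > 0] = 0
    return mask
```

The skin tone percentage index refinement and the shape-based candidate verification described in the abstract are not reproduced here.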

Human Face Detection from Still Image using Neural Networks and Adaptive Skin Color Model (신경망과 적응적 스킨 칼라 모델을 이용한 얼굴 영역 검출 기법)

  • 손정덕; 고한석
    • Proceedings of the IEEK Conference / 1999.06a / pp.579-582 / 1999
  • In this paper, we propose a human face detection algorithm using an adaptive skin color model and neural networks. To attain robustness against illumination changes and the variability of human skin color, we perform color segmentation of the input image by adaptive thresholding in a modified hue-saturation color space (TSV). To distinguish faces from other segmented objects, we calculate invariant moments for each face candidate and use a multilayer perceptron neural network trained with the backpropagation algorithm. The simulation results show superior performance for a variety of poses and relatively complex backgrounds when compared to other existing algorithms.
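
A minimal sketch of the candidate-verification idea, using OpenCV's Hu invariant moments and scikit-learn's MLPClassifier as a stand-in for the backpropagation network; the adaptive TSV thresholding and the paper's network architecture are not reproduced:

```python
import cv2
import numpy as np
from sklearn.neural_network import MLPClassifier

def hu_features(candidate_mask):
    """Seven Hu invariant moments of a binary candidate region, log-scaled."""
    m = cv2.moments(candidate_mask, binaryImage=True)
    hu = cv2.HuMoments(m).flatten()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-12)

def train_face_classifier(candidate_masks, labels):
    """Train an MLP on Hu-moment features (labels: 1 = face, 0 = non-face)."""
    X = np.array([hu_features(m) for m in candidate_masks])
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000)
    clf.fit(X, labels)
    return clf
```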

Object-based Image Classification by Integrating Multiple Classes in Hue Channel Images (Hue 채널 영상의 다중 클래스 결합을 이용한 객체 기반 영상 분류)

  • Ye, Chul-Soo
    • Korean Journal of Remote Sensing / v.37 no.6_3 / pp.2011-2025 / 2021
  • In high-resolution satellite image classification, when the pixels belonging to one class have different color values, as with buildings of various colors, it is difficult to determine the color information that represents the class. In this paper, to solve the problem of determining the representative color information of a class, we propose a method that divides the hue channel of the HSV (Hue, Saturation, Value) color space and performs object-based classification. To this end, after transforming the input image from the RGB color space into the components of the HSV color space, the hue component is divided into subchannels at regular intervals. Minimum-distance image classification is performed for each hue subchannel, and the classification result is combined with the image segmentation result. When the proposed method was applied to KOMPSAT-3A imagery, the overall accuracy was 84.97% and the kappa coefficient was 77.56%, an improvement of more than 10% in classification accuracy compared to commercial software.
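
The hue-splitting step could look like the sketch below, which divides OpenCV's 0-179 hue range into equal-interval subchannel masks and assigns pixel features to the nearest class mean; the object-based combination with the segmentation result is omitted, and the bin count is an assumption:

```python
import cv2
import numpy as np

def hue_subchannels(bgr_image, n_bins=8):
    """Split the hue component into n_bins equal-interval boolean masks."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    hue = hsv[:, :, 0].astype(np.float32)            # OpenCV hue range: 0..179
    edges = np.linspace(0, 180, n_bins + 1)
    return [(hue >= lo) & (hue < hi) for lo, hi in zip(edges[:-1], edges[1:])]

def min_distance_classify(pixel_features, class_means):
    """Minimum-distance classification: assign each feature vector to the nearest class mean."""
    d = np.linalg.norm(pixel_features[:, None, :] - class_means[None, :, :], axis=2)
    return np.argmin(d, axis=1)
```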

Fast Text Line Segmentation Model Based on DCT for Color Image (컬러 영상 위에서 DCT 기반의 빠른 문자 열 구간 분리 모델)

  • Shin, Hyun-Kyung
    • The KIPS Transactions:PartD / v.17D no.6 / pp.463-470 / 2010
  • We present a very fast and robust text line segmentation method based on the DCT blocks of a color image, without decompression or binarization. Using the DC coefficient and three primary AC coefficients from each DCT block, we create a gray-scale image whose size is reduced by a factor of 8x8. To detect and locate the white strips between text lines, we analyze the horizontal and vertical projection profiles of this image and apply a direct Markov model to recover missing white strips by estimating their hidden periodicity. Performance results show that our method is 40-100 times faster than the traditional method.
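
A simplified sketch of the reduced-image and projection-profile steps; here the DCT is recomputed from a decoded grayscale image rather than read from the compressed stream, and the Markov-model recovery of missing strips is not shown:

```python
import cv2
import numpy as np

def dc_thumbnail(gray, block=8):
    """Build a 1/8-scale image from the DC coefficient of each 8x8 DCT block."""
    h8, w8 = gray.shape[0] // block, gray.shape[1] // block
    thumb = np.empty((h8, w8), dtype=np.float32)
    for i in range(h8):
        for j in range(w8):
            blk = gray[i*block:(i+1)*block, j*block:(j+1)*block].astype(np.float32)
            thumb[i, j] = cv2.dct(blk)[0, 0]          # DC term of the block
    return thumb

def white_strip_rows(thumb, ratio=0.9):
    """Rows whose horizontal projection profile is bright enough to be line gaps."""
    profile = thumb.mean(axis=1)
    return np.where(profile > ratio * profile.max())[0]
```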

An Epipolar Rectification for Object Segmentation (객체분할을 위한 에피폴라 Rectification)

  • Jeong, Seung-Do; Kang, Sung-Suk; Cho, Jung-Won; Choi, Byung-Uk
    • The Journal of Korean Institute of Communications and Information Sciences / v.29 no.1C / pp.83-91 / 2004
  • Epipolar rectification is the process of transforming the epipolar geometry of a pair of images into a canonical form. This is accomplished by applying to each image a homography that maps the epipole to a predetermined point. In this process, the rectified images produced by the homographies must satisfy the epipolar constraint. These homographies are not unique; however, we can find homographies suited to the system's purpose by means of an additional constraint. Since the rectified images form a stereo pair, the disparity can be found efficiently. Therefore, we can estimate the three-dimensional information of objects within an image and apply this information to object segmentation. This paper proposes a rectification method for object segmentation and applies the rectification result to object segmentation. By using color together with the relative continuity of disparity, the drawbacks of previous segmentation methods, in which a single object is split into several regions because its parts have different colors, or distinct objects are merged into one because they have similar colors, are overcome. Experimental results show that the disparity in the images produced by the proposed rectification method is continuous within a single object. We therefore confirm that our rectification method is suitable for object segmentation.
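
For the rectification step itself, an uncalibrated pipeline along these lines is possible with OpenCV (feature matching, fundamental matrix, rectifying homographies); the additional constraint the paper uses to choose among the non-unique homographies, and the disparity-based segmentation, are not reproduced:

```python
import cv2
import numpy as np

def rectify_pair(img1, img2):
    """Estimate rectifying homographies for an uncalibrated image pair and warp both images."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    pts1 = np.float32([k1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([k2[m.trainIdx].pt for m in matches])
    F, inliers = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC)
    size = (img1.shape[1], img1.shape[0])             # (width, height)
    ok, H1, H2 = cv2.stereoRectifyUncalibrated(
        pts1[inliers.ravel() == 1], pts2[inliers.ravel() == 1], F, size)
    return cv2.warpPerspective(img1, H1, size), cv2.warpPerspective(img2, H2, size)
```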

Content-Based Image Retrieval System using Feature Extraction of Image Objects (영상 객체의 특징 추출을 이용한 내용 기반 영상 검색 시스템)

  • Jung Seh-Hwan; Seo Kwang-Kyu
    • Journal of Korean Society of Industrial and Systems Engineering / v.27 no.3 / pp.59-65 / 2004
  • This paper explores an image segmentation and representation method using vector quantization (VQ) on color and texture for a content-based image retrieval system. The basic idea is a transformation from the raw pixel data to a small set of image regions that are coherent in color and texture space. These schemes are used for object-based image retrieval. The features used for image retrieval are three color features from the HSV color model and five texture features from gray-level co-occurrence matrices. Once feature extraction is performed on the image, each pixel is represented by an 8-dimensional feature vector. The VQ algorithm is used to cluster the pixel data into groups. A representative feature table based on the dominant groups is obtained and used to retrieve similar images according to the objects within the image. The proposed method can retrieve similar images even when the objects are translated, scaled, or rotated.
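
A minimal sketch of the vector-quantization step, using k-means as the quantizer over per-pixel HSV color features; the five GLCM texture features and the retrieval stage are omitted, and the codebook size is an assumption:

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

def dominant_color_codebook(bgr_image, n_codes=8):
    """Vector-quantize per-pixel HSV features; return cluster centers ordered by dominance."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV).reshape(-1, 3).astype(np.float32)
    km = KMeans(n_clusters=n_codes, n_init=10, random_state=0).fit(hsv)
    counts = np.bincount(km.labels_, minlength=n_codes)
    order = np.argsort(counts)[::-1]                  # dominant groups first
    return km.cluster_centers_[order], counts[order]
```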

Enhancement of Tongue Segmentation by Using Data Augmentation (데이터 증강을 이용한 혀 영역 분할 성능 개선)

  • Chen, Hong; Jung, Sung-Tae
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.13 no.5 / pp.313-322 / 2020
  • A large volume of data improves the robustness of deep learning models and helps avoid overfitting. In automatic tongue segmentation, the availability of annotated tongue images is often limited because of the difficulty of collecting and labeling tongue image datasets in practice. Data augmentation can expand the training dataset and increase the diversity of the training data by using label-preserving transformations, without collecting new data. In this paper, augmented tongue image datasets were built using seven augmentation techniques, such as image cropping, rotation, flipping, and color transformations. The performance of the data augmentation techniques was studied using state-of-the-art transfer learning models such as InceptionV3, EfficientNet, ResNet, and DenseNet. Our results show that geometric transformations lead to larger performance gains than color transformations, and that segmentation accuracy can be increased by 5% to 20% compared with no augmentation. Furthermore, a dataset augmented with a random linear combination of geometric and color transformations gives better segmentation performance than all other datasets, achieving an accuracy of 94.98% with the InceptionV3 model.
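
A sketch of a label-preserving augmentation pipeline for image/mask pairs; the albumentations library and the specific transform parameters are assumptions, not the seven techniques used in the paper:

```python
import albumentations as A

# Illustrative geometric + color augmentations applied jointly to image and mask.
augment = A.Compose([
    A.RandomCrop(height=224, width=224),
    A.HorizontalFlip(p=0.5),
    A.Rotate(limit=30, p=0.5),
    A.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2, hue=0.1, p=0.5),
])

def augment_pair(image, mask):
    """Apply the same random transform to a tongue image and its segmentation mask."""
    out = augment(image=image, mask=mask)
    return out["image"], out["mask"]
```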

Facial Region Segmentation using Watershed Algorithm based on Depth Information (깊이정보 기반 Watershed 알고리즘을 이용한 얼굴영역 분할)

  • Kim, Jang-Won
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.4 no.4 / pp.225-230 / 2011
  • In this paper, we propose a segmentation method for detecting the facial region using a watershed algorithm based on depth information together with a merging step. The method consists of three steps: watershed segmentation, seed region detection, and merging. The input color image is segmented into small uniform regions by the watershed transform. The facial region can then be detected by merging the uniform regions under chromaticity and edge constraints. The problems of existing methods that use only chromaticity or only edges can be solved by the proposed method. Computer simulations are performed to evaluate the proposed method, and the results show that it is superior for facial region segmentation.
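
A minimal marker-based watershed sketch with OpenCV; the paper derives the seed regions from depth information and merges regions with chromaticity and edge constraints, whereas the naive seeding below is only a placeholder:

```python
import cv2
import numpy as np

def naive_markers(bgr_image):
    """Placeholder seed markers from Otsu thresholding (the paper uses depth instead)."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    _, fg = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    _, markers = cv2.connectedComponents(fg)
    return markers + 1                                 # keep label 0 free for 'unknown'

def watershed_regions(bgr_image, markers):
    """Run marker-based watershed; returns a label image with -1 on region boundaries."""
    return cv2.watershed(bgr_image, markers.astype(np.int32))
```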

Image-based fire area segmentation method by removing the smoke area from the fire scene videos (화재 현장 영상에서 연기 영역을 제외한 이미지 기반 불의 영역 검출 기법)

  • Kim, Seungnam; Choi, Myungjin; Kim, Sun-Jeong; Kim, Chang-Hun
    • Journal of the Korea Computer Graphics Society / v.28 no.4 / pp.23-30 / 2022
  • In this paper, we propose an algorithm that can accurately segment a fire even when it is surrounded by smoke of a similar color. Existing fire area segmentation algorithms have the problem that they cannot separate fire from smoke in fire images. In this paper, fire is successfully separated from smoke by applying a color compensation method and a fog removal method as preprocessing steps before the fire area segmentation algorithm. We confirm that the proposed approach segments fire more effectively than existing methods on images of fire scenes covered with smoke. In addition, we propose a way to use the fire segmentation algorithm for efficient fire detection in factories and homes.
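
A sketch of the overall flow, with a simple gray-world correction standing in for the paper's color compensation and fog removal, followed by an illustrative HSV rule for flame-colored pixels:

```python
import cv2
import numpy as np

def gray_world(bgr):
    """Gray-world color compensation (a stand-in for the paper's preprocessing)."""
    b, g, r = cv2.split(bgr.astype(np.float32))
    mean = (b.mean() + g.mean() + r.mean()) / 3.0
    out = cv2.merge([b * mean / b.mean(), g * mean / g.mean(), r * mean / r.mean()])
    return np.clip(out, 0, 255).astype(np.uint8)

def fire_mask(bgr):
    """Flame-colored pixels: red/orange hue with high saturation and value (illustrative bounds)."""
    hsv = cv2.cvtColor(gray_world(bgr), cv2.COLOR_BGR2HSV)
    return cv2.inRange(hsv, (0, 80, 150), (35, 255, 255))
```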

CRF-Based Figure/Ground Segmentation with Pixel-Level Sparse Coding and Neighborhood Interactions

  • Zhang, Lihe; Piao, Yongri
    • Journal of Information and Communication Convergence Engineering / v.13 no.3 / pp.205-214 / 2015
  • In this paper, we propose a new approach to learning a discriminative model for figure/ground segmentation by incorporating bag-of-features and conditional random field (CRF) techniques. We advocate the use of image patches instead of superpixels as the basic processing unit. The latter have a homogeneous appearance and adhere to object boundaries, whereas an image patch often contains more discriminative information (e.g., local image structure) with which to distinguish categories. We use pixel-level sparse coding to represent an image patch. With the proposed feature representation, the unary classifier achieves considerable binary segmentation performance. Further, we integrate unary and pairwise potentials into the CRF model to refine the segmentation results. The pairwise potentials include color and texture potentials with neighborhood interactions, and an edge potential. High segmentation accuracy is demonstrated on three benchmark datasets: the Weizmann horse dataset, the VOC2006 cow dataset, and the MSRC multiclass dataset. Extensive experiments show that the proposed approach performs favorably against state-of-the-art approaches.
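
A sketch of the kind of contrast-sensitive pairwise color term that can be added to a CRF energy between neighboring units; the learned weights and the paper's texture and edge potentials are not reproduced:

```python
import numpy as np

def color_pairwise_cost(color_i, color_j, beta=1.0, weight=1.0):
    """Potts-style pairwise cost: label disagreement between similar-colored
    neighbors is penalized more than between dissimilar ones."""
    diff = float(np.sum((np.asarray(color_i, dtype=float) - np.asarray(color_j, dtype=float)) ** 2))
    return weight * np.exp(-beta * diff)

# Usage: add color_pairwise_cost(c_i, c_j) to the CRF energy whenever neighboring
# units i and j are assigned different figure/ground labels.
```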