• Title/Summary/Keyword: Color Histogram (컬러 히스토그램)


Systematic Approach to The Extraction of Effective Region for Tongue Diagnosis (설진 유효 영역 추출의 시스템적 접근 방법)

  • Kim, Keun-Ho;Do, Jun-Hyeong;Ryu, Hyun-Hee;Kim, Jong-Yeol
    • Journal of the Institute of Electronics Engineers of Korea SC / v.45 no.6 / pp.123-131 / 2008
  • In Oriental medicine, the status of the tongue is an important indicator of a person's health, reflecting physiological and clinicopathological changes of the internal organs. Tongue diagnosis is not only convenient but also non-invasive, and is therefore widely used in Oriental medicine. However, it is strongly affected by examination circumstances such as the light source, the patient's posture, and the doctor's condition. To develop an automatic tongue diagnosis system for objective and standardized diagnosis, the tongue region must be segmented from a captured facial image and the tongue coating classified; this is difficult because the colors of the tongue, the lips, and the skin around the mouth are similar. The proposed method consists of preprocessing, over-segmentation, detecting edges with a local minimum over the shaded area implied by the structure of the tongue, correcting local minima or detecting the edge with the greatest color difference, selecting the edge that corresponds to the tongue shape, and smoothing the edges, which yields the segmented tongue region. Preprocessing comprises down-sampling to reduce computation time, histogram equalization, and edge enhancement. This systematic procedure separated only the tongue region from facial images obtained with a digital tongue diagnosis system. Evaluation of the results by doctors of Oriental medicine showed that the segmented region, excluding non-tongue areas, provides important information for accurate diagnosis. The proposed method can be used for objective and standardized diagnosis and for u-Healthcare systems.
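The preprocessing steps named in the abstract (down-sampling followed by histogram equalization) can be sketched in NumPy. This is an illustrative reconstruction under assumed parameters, not the authors' implementation; the edge-enhancement step is omitted and the function names are hypothetical:

```python
import numpy as np

def equalize_histogram(gray):
    """Remap gray levels so their cumulative distribution becomes roughly uniform."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    span = cdf[-1] - cdf_min
    if span == 0:
        return gray.copy()  # constant image: nothing to equalize
    lut = np.clip(np.round((cdf - cdf_min) / span * 255), 0, 255).astype(np.uint8)
    return lut[gray]

def preprocess(gray, factor=2):
    """Down-sample (to cut computation time) then equalize, as in the abstract."""
    return equalize_histogram(gray[::factor, ::factor])
```

The down-sampling factor of 2 is an assumption; the abstract does not state one.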

Content based Video Segmentation Algorithm using Comparison of Pattern Similarity (장면의 유사도 패턴 비교를 이용한 내용기반 동영상 분할 알고리즘)

  • Won, In-Su;Cho, Ju-Hee;Na, Sang-Il;Jin, Ju-Kyong;Jeong, Jae-Hyup;Jeong, Dong-Seok
    • Journal of Korea Multimedia Society / v.14 no.10 / pp.1252-1261 / 2011
  • In this paper, we propose a video segmentation algorithm based on comparing similarity patterns between scenes. Shot boundaries fall into two types, abrupt change and gradual change; representative examples of gradual change are dissolve, fade-in, fade-out, and wipe transitions. The proposed method treats shot boundary detection as a two-class problem: whether a shot boundary event happens or not. Because defining the similarity between frames is essential for shot boundary detection, we propose two similarity measures: within-similarity, defined by comparing features of frames belonging to the same shot, and between-similarity, defined by comparing features of frames belonging to different scenes. Finally, we compare the statistical patterns of the within-similarity and the between-similarity. Because this measure is robust to flashlights and object movement, the proposed algorithm reduces the false-positive rate. We employ the color histogram and the mean of sub-blocks of each frame image as frame features. We evaluated the algorithm on a video dataset including the TREC-2001 and TREC-2002 sets, where it achieved 91.84% recall and 86.43% precision.
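The frame feature and frame-to-frame comparison described above (a normalized color histogram compared between adjacent frames) might be sketched as follows. The bin count, the use of histogram intersection, and the threshold are illustrative assumptions, not values taken from the paper:

```python
import numpy as np

def color_histogram(frame, bins=8):
    """Concatenated per-channel histogram of an RGB frame, normalized to sum to 1."""
    hists = [np.bincount((frame[..., c].astype(int) * bins // 256).ravel(),
                         minlength=bins) for c in range(3)]
    h = np.concatenate(hists).astype(float)
    return h / h.sum()

def histogram_intersection(h1, h2):
    """~1.0 for identical normalized histograms, 0.0 for disjoint ones."""
    return float(np.minimum(h1, h2).sum())

def is_shot_boundary(frame_a, frame_b, threshold=0.5):
    """Declare an abrupt boundary when adjacent frames are too dissimilar."""
    sim = histogram_intersection(color_histogram(frame_a), color_histogram(frame_b))
    return sim < threshold
```

The paper's actual decision rule compares the statistical patterns of within- and between-similarity rather than thresholding a single pairwise score; the sketch shows only the underlying feature comparison.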

A Study on Face Awareness with Free size using Multi-layer Neural Network (다층신경망을 이용한 임의의 크기를 가진 얼굴인식에 관한 연구)

  • Song, Hong-Bok;Seol, Ji-Hwan
    • Journal of the Korean Institute of Intelligent Systems / v.15 no.2 / pp.149-162 / 2005
  • This paper suggests a way to detect specific wanted figures in public places such as subway stations and banks by comparing color face images extracted from real-time CCTV with the face images of the designated figures. Assuming that the characteristics of a surveillance camera allow the face information on screen to change arbitrarily and to contain many faces, we focused on accurate detection of the face area. Normalizing arbitrary face images by subsampling them to 20×20 pixels, based on the perceptron neural network model proposed by F. Rosenblatt, produced the effect of recognizing the whole face. An optimal linear filter and a histogram-shaping technique were employed to minimize outside interference such as lighting. An addition operation with oval (egg-shaped) masks was added to the preprocessing stage to cut out unnecessary work. The preprocessed images were divided into three receptive fields, and the locations of the eyes, nose, and mouth were determined through the neural network. Furthermore, the precision of the results was improved by arranging three single networks with different initial values in a row.
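The normalization step above, subsampling an arbitrary-size face patch to a fixed 20×20 grid before it is fed to the network, could look like this nearest-neighbor sketch; the function name is hypothetical and the paper may use a different interpolation:

```python
import numpy as np

def subsample_to_fixed(gray, size=20):
    """Nearest-neighbor subsampling of an arbitrary-size patch to size x size."""
    h, w = gray.shape
    rows = np.arange(size) * h // size   # source row for each output row
    cols = np.arange(size) * w // size   # source column for each output column
    return gray[np.ix_(rows, cols)]
```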

Design and Implementation of a Real-Time Lipreading System Using PCA & HMM (PCA와 HMM을 이용한 실시간 립리딩 시스템의 설계 및 구현)

  • Lee, Chi-Geun;Lee, Eun-Suk;Jung, Sung-Tae;Lee, Sang-Seol
    • Journal of Korea Multimedia Society / v.7 no.11 / pp.1597-1609 / 2004
  • Many lipreading systems have been proposed to compensate for the drop in speech recognition rate in noisy environments. Previous lipreading systems work only under specific conditions such as artificial lighting and a predefined background color. In this paper, we propose a real-time lipreading system that allows the speaker to move and relaxes the restrictions on color and lighting conditions. The proposed system extracts the face and lip regions and the essential visual information from a video sequence captured with a common PC camera, and recognizes uttered words from that visual information in real time. It uses a hue histogram model to extract the face and lip regions, the mean shift algorithm to track the face of a moving speaker, PCA (Principal Component Analysis) to extract the visual information for training and testing, and an HMM (Hidden Markov Model) as the recognition algorithm. Experimental results show that the system achieves a recognition rate of 90% for speaker-dependent lipreading and raises the speech recognition rate by 40-85%, depending on the noise level, when combined with audio speech recognition.
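Of the components listed above, the mean shift tracking step is the most self-contained. A minimal sketch over a precomputed skin-probability map (the hue-histogram back-projection is assumed to exist already, and the window representation is an assumption) might look like:

```python
import numpy as np

def mean_shift_window(prob, window, n_iter=20):
    """Iteratively shift a (row, col, height, width) search window toward
    the centroid of the probability mass it currently covers."""
    r, c, h, w = window
    for _ in range(n_iter):
        patch = prob[r:r + h, c:c + w]
        total = patch.sum()
        if total == 0:
            break  # no mass under the window; nothing to follow
        ys, xs = np.mgrid[0:patch.shape[0], 0:patch.shape[1]]
        dr = int(round((ys * patch).sum() / total - (patch.shape[0] - 1) / 2))
        dc = int(round((xs * patch).sum() / total - (patch.shape[1] - 1) / 2))
        if dr == 0 and dc == 0:
            break  # centroid coincides with window center: converged
        r = max(0, min(prob.shape[0] - h, r + dr))
        c = max(0, min(prob.shape[1] - w, c + dc))
    return r, c, h, w
```

Run once per frame, seeding each call with the previous frame's window, to follow a moving face.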


Makeup transfer by applying a loss function based on facial segmentation combining edge with color information (에지와 컬러 정보를 결합한 안면 분할 기반의 손실 함수를 적용한 메이크업 변환)

  • Lim, So-hyun;Chun, Jun-chul
    • Journal of Internet Computing and Services / v.23 no.4 / pp.35-43 / 2022
  • Makeup is the most common way to improve a person's appearance. However, since makeup styles are very diverse, applying makeup oneself costs considerable time and money, so the need for makeup automation is increasing, and makeup transfer is being studied toward that end. Makeup transfer applies a makeup style to a face image without makeup. Approaches divide into traditional image-processing-based methods and deep-learning-based methods; among the latter, many studies based on Generative Adversarial Networks have been performed. However, both approaches share disadvantages: the resulting image is unnatural, the makeup transfer is not clear, and the result is smeared or heavily influenced by the makeup-style face image. To express clear makeup boundaries and alleviate the influence of the makeup-style face image, this study segments the makeup area and computes a loss function using HoG (Histogram of Oriented Gradients). HoG extracts image features from the magnitude and direction of the edges present in an image; with it, we propose a makeup transfer network that learns robustly on edges. Comparing images generated by the proposed model with those generated by BeautyGAN, used as the base model, confirmed that the proposed model performs better; using additional facial information is presented as future work.
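A single-cell HoG descriptor of the kind such an edge-based loss is built on can be sketched as follows. The 9 unsigned orientation bins follow the common HoG formulation; the paper's exact parameters are not given in the abstract:

```python
import numpy as np

def hog_cell(gray, n_bins=9):
    """Histogram of unsigned gradient orientations for one cell, weighted by magnitude."""
    gray = np.asarray(gray, float)
    gx = np.zeros_like(gray)
    gy = np.zeros_like(gray)
    gx[:, 1:-1] = gray[:, 2:] - gray[:, :-2]    # central differences, x
    gy[1:-1, :] = gray[2:, :] - gray[:-2, :]    # central differences, y
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180  # unsigned orientation, 0..180 degrees
    bins = np.minimum((ang / (180 / n_bins)).astype(int), n_bins - 1)
    return np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_bins)
```

A HoG-based loss would compare such descriptors between the generated image and the source face, penalizing edges that the transfer smears away.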

Automatic Text Extraction from News Video using Morphology and Text Shape (형태학과 문자의 모양을 이용한 뉴스 비디오에서의 자동 문자 추출)

  • Jang, In-Young;Ko, Byoung-Chul;Kim, Kil-Cheon;Byun, Hye-Ran
    • Journal of KIISE:Computing Practices and Letters / v.8 no.4 / pp.479-488 / 2002
  • In recent years the amount of digital video in use has risen dramatically, keeping pace with the increasing use of the Internet, so an automated method is needed for indexing digital video databases. Textual information appearing in digital video, both superimposed captions and embedded scene text, can be a crucial clue for video indexing. In this paper, a new method is presented to extract both kinds of text from a freeze-frame of news video. The algorithm has three steps. First, the color image is converted into a gray-level image, contrast stretching is applied to enhance the contrast of the input image, and a modified local adaptive thresholding is applied to the contrast-stretched result. The second step comprises three processes: eliminating text-like components by applying erosion, dilation, and (OpenClose+CloseOpen)/2 morphological operations; maintaining text components using the (OpenClose+CloseOpen)/2 operation with a new Geo-correction method; and subtracting the two resulting images to further eliminate false-positive components. In the third, filtering step, the characteristics of each component are used, such as the ratio of the number of pixels in a candidate component to the number of its boundary pixels, and the ratio of the minor to the major axis of its bounding box. The proposed method obtained acceptable results on 300 news images, with a recognition rate of 93.6%, and performs well on various kinds of images when the size of the structuring element is adjusted.
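The morphological building blocks named above (erosion, dilation, and the opening/closing combinations) can be sketched for binary images as follows. This is a generic illustration with an assumed square structuring element; the paper's Geo-correction step and its grayscale (OpenClose+CloseOpen)/2 averaging are not reproduced:

```python
import numpy as np

def dilate(binary, k=3):
    """Binary dilation with a k x k square structuring element."""
    pad = k // 2
    padded = np.pad(binary, pad)  # zero padding: border treated as background
    out = np.zeros_like(binary)
    for dr in range(k):           # OR together all k*k shifted copies
        for dc in range(k):
            out |= padded[dr:dr + binary.shape[0], dc:dc + binary.shape[1]]
    return out

def erode(binary, k=3):
    """Erosion as the dual of dilation on the complement image."""
    return 1 - dilate(1 - binary, k)

def opening(binary, k=3):
    """Erode then dilate: removes components smaller than the structuring element."""
    return dilate(erode(binary, k), k)

def closing(binary, k=3):
    """Dilate then erode: fills gaps smaller than the structuring element."""
    return erode(dilate(binary, k), k)
```

Opening discards isolated noise pixels while preserving character-sized blobs, which is the property the text/non-text separation above relies on.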