• Title/Summary/Keyword: Mouth Detection

Search Results: 152

Normalized Region Extraction of Facial Features by Using Hue-Based Attention Operator (색상기반 주목연산자를 이용한 정규화된 얼굴요소영역 추출)

  • 정의정;김종화;전준형;최흥문
    • The Journal of Korean Institute of Communications and Information Sciences / v.29 no.6C / pp.815-823 / 2004
  • A hue-based attention operator and a combinational integral projection function (CIPF) are proposed to extract normalized regions of the face and facial features robustly against illumination variation. Face candidate regions are efficiently detected using a skin color filter, and the eyes are located accurately and robustly against illumination variation by applying the proposed hue- and symmetry-based attention operator to the face candidate regions. The faces are then confirmed by verifying the eyes with a color-based eye variance filter. The proposed CIPF, which combines weighted hue and intensity, is applied to detect the accurate vertical locations of the eyebrows and the mouth under illumination variations and in the presence of a mustache. The global face and its local feature regions are exactly located and normalized based on this accurate geometrical information. Experimental results on the AR face database [8] show that the proposed eye detection method yields a detection rate about 39.3% better than the conventional gray GST-based method. As a result, the normalized facial features can be extracted robustly and consistently, based on the exact eye locations, under illumination variations.
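The integral projection idea described in this abstract can be sketched minimally as follows: project a weighted combination of hue and inverted intensity along image rows, so that reddish or dark horizontal structures (eyebrows, mouth) produce row peaks. The weight and the hue handling here are illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np

def combinational_integral_projection(hue, intensity, w_hue=0.7):
    # Weighted combination of hue and inverted intensity; w_hue is a
    # hypothetical weight (the paper's value is not given in the abstract).
    combined = w_hue * hue + (1.0 - w_hue) * (255.0 - intensity)
    # Integral projection: reduce each row to a single value.
    return combined.mean(axis=1)
```

The vertical location of a feature would then be taken as the row with the peak projection response.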

Object Segmentation for Image Transmission Services and Facial Characteristic Detection based on Knowledge (화상전송 서비스를 위한 객체 분할 및 지식 기반 얼굴 특징 검출)

  • Lim, Chun-Hwan;Yang, Hong-Young
    • Journal of the Korean Institute of Telematics and Electronics T / v.36T no.3 / pp.26-31 / 1999
  • In this paper, we propose a knowledge-based facial characteristic detection algorithm and an object segmentation method for image communication. Under the condition of constant illumination and a fixed distance from the video camera to the human face, we capture 256 $\times$ 256 input images with 256 gray levels and then remove noise using a Gaussian filter. Two images are captured with the video camera: one contains the human face; the other contains only the background region, without a face. We then compute the differential image between the two images. After removing noise from the differential image by erosion and dilation, the background region is separated from the facial region. We locate the eyes, ears, nose, and mouth by searching for edge components in the facial image. Simulation results verify the efficiency of the proposed algorithm.

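The segmentation pipeline this abstract describes (differential image of a face capture and a background capture, then erosion and dilation to remove noise) can be sketched roughly as below; the threshold, the 3x3 structuring element, and the single erode/dilate pass are assumptions for illustration:

```python
import numpy as np

def morph(mask, op, iters=1):
    # 3x3 binary morphology: erosion when op is np.logical_and,
    # dilation when op is np.logical_or.
    pad_val = op is np.logical_and  # pad with 1s for erosion, 0s for dilation
    for _ in range(iters):
        p = np.pad(mask, 1, constant_values=pad_val)
        out = p[1:-1, 1:-1].copy()
        h, w = mask.shape
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                out = op(out, p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w])
        mask = out
    return mask

def segment_face(with_face, background, thresh=30):
    # Differential image between the two captures, thresholded to a
    # binary mask, then opened (erode, then dilate) to suppress noise.
    diff = np.abs(with_face.astype(int) - background.astype(int)) > thresh
    return morph(morph(diff, np.logical_and), np.logical_or)
```

An opening (erosion followed by dilation) removes isolated noise pixels while roughly preserving the large face blob, which is why the abstract applies the two operations in that order.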

Knowledge and Opinions Regarding Oral Cancer among Yemeni Dental Students

  • Al-Maweri, Sadeq Ali;Abbas, Alkasem;Tarakji, Bassel;Al-Jamaei, Aisha Saleh;Alaizari, Nader Ahmed;Al-Shamiri, Hashem M
    • Asian Pacific Journal of Cancer Prevention / v.16 no.5 / pp.1765-1770 / 2015
  • Background: Oral cancer presents with high mortality rates, and the likelihood of survival is remarkably better when it is detected early. Health care providers, particularly dentists, play a critical role in the early detection of oral cancers and should be knowledgeable and skillful in oral cancer diagnosis. Purpose: The aim of the present study was to assess the current knowledge of future Yemeni dentists and their opinions on oral cancer. Materials and Methods: A pretested self-administered questionnaire was distributed to fourth- and fifth-year dental students. Questions relating to knowledge of oral cancer, risk factors, and opinions on oral cancer prevention and practices were posed. Results: The response rate was 80%. The vast majority of students identified smoking and smokeless tobacco as the major risk factors for oral cancer. Most of the students (92.6%) knew that squamous cell carcinoma is the most common form of oral cancer, and 85.3% were aware that the tongue and the floor of the mouth are the most likely sites. While the majority showed willingness to advise their patients on risk factors, only 40% felt adequately trained to provide such advice. More than 85% of students admitted that they need further information regarding oral cancer. As expected, final-year students appeared slightly more knowledgeable regarding the risk factors and clinical features of the disease. Conclusions: The findings of the present study suggest that there is a need to reinforce the undergraduate dental curriculum with regard to oral cancer education, particularly its prevention and early detection.

Conversation Context Annotation using Speaker Detection (화자인식을 이용한 대화 상황정보 어노테이션)

  • Park, Seung-Bo;Kim, Yoo-Won;Jo, Geun-Sik
    • Journal of Korea Multimedia Society / v.12 no.9 / pp.1252-1261 / 2009
  • One notable challenge in video search and summarization is extracting semantics from video content and annotating its context. Video semantics or context can be obtained by extracting objects and the contexts between objects from the video. However, a method that extracts only objects does not express enough semantics for a shot or scene, as it does not describe the relations and interactions between objects. To be more effective, after extracting objects, context such as the relations and interactions between them needs to be extracted from conversation situations. This paper studies how to detect the speaker and how to compose conversation context for annotation. Based on this study, we propose methods in which characters are recognized through face recognition technology, the speaker is detected through mouth motion, and conversation context is extracted using a rule composed of speaker presence, the number of characters, and subtitle presence; finally, the scene context is converted to an XML file and saved.

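The annotation rule this abstract names (speaker presence via mouth motion, number of characters, subtitle presence) could be sketched as below; the motion threshold, the frame dictionary layout, and the two-character minimum are illustrative assumptions, not the paper's exact rule:

```python
def annotate_conversation(frames, motion_thresh=0.5):
    # Each frame is assumed to carry per-character mouth-motion scores
    # and a subtitle-presence flag (hypothetical data layout).
    contexts = []
    for f in frames:
        speaker_present = any(m > motion_thresh for m in f["mouth_motion"])
        is_conversation = (speaker_present
                           and len(f["mouth_motion"]) >= 2  # at least two characters
                           and f["subtitle"])
        contexts.append("conversation" if is_conversation else "none")
    return contexts
```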

Facial Recognition Algorithm Based on Edge Detection and Discrete Wavelet Transform

  • Chang, Min-Hyuk;Oh, Mi-Suk;Lim, Chun-Hwan;Ahmad, Muhammad-Bilal;Park, Jong-An
    • Transactions on Control, Automation and Systems Engineering / v.3 no.4 / pp.283-288 / 2001
  • In this paper, we propose a method for extracting the facial characteristics of a human being in an image. Given a pair of gray level sample images taken with and without a human being, the face is segmented from the image. Noise in the input images is removed with Gaussian filters. Edge maps are computed for the two input images, and the binary edge differential image is obtained from the difference of the two edge maps. A mask for face detection is made by erosion followed by dilation on the resulting binary edge differential image. This mask is used to extract the human being from the two input image sequences, and features of the face are extracted from the segmented image. An effective recognition system using the discrete wavelet transform (DWT) is used for recognition. For extracting the facial features, such as the eyebrows, eyes, nose, and mouth, an edge detector is applied to the segmented face image. The eye area and the center of the face are found from the horizontal and vertical components of the edge map of the segmented image; other facial features are obtained from the edge information of the image. The characteristic vectors are extracted from the DWT of the segmented face image, normalized between +1 and -1, and used as input vectors for the neural network. Simulation results show a recognition rate of 100% on the learned system and about 92% on the test images.

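The DWT feature step in this abstract (characteristic vectors from the DWT of the segmented face, normalized between +1 and -1) can be sketched with a one-level 2D Haar transform; the choice of the Haar wavelet and of the low-frequency subband are assumptions, since the abstract does not name the wavelet used:

```python
import numpy as np

def haar_dwt2(img):
    # One-level 2D Haar DWT: average/detail along rows, then along
    # columns, giving the LL, LH, HL, HH subbands.
    a = (img[0::2, :] + img[1::2, :]) / 2.0
    d = (img[0::2, :] - img[1::2, :]) / 2.0
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def feature_vector(img):
    # Flatten the low-frequency subband and normalize to [-1, 1],
    # matching the abstract's neural-network input description.
    ll, _, _, _ = haar_dwt2(img.astype(float))
    v = ll.ravel()
    peak = np.abs(v).max()
    return v / peak if peak > 0 else v
```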

Skew correction of face image using eye components extraction (눈 영역 추출에 의한 얼굴 기울기 교정)

  • Yoon, Ho-Sub;Wang, Min;Min, Byung-Woo
    • Journal of the Korean Institute of Telematics and Electronics B / v.33B no.12 / pp.71-83 / 1996
  • This paper describes a facial component detection and skew correction algorithm for face recognition. We use a priori knowledge and models about isolated regions to detect eye locations in face images captured in natural office environments. The relations between human face components are represented by several rules. We adopt an edge detection algorithm using the Sobel mask and an 8-connected labelling algorithm using array pointers. A labeled image has many isolated components. Initially, eye size rules are used; they are not affected much by irregular input image conditions, and they constrain the component size and the ratio between horizontal and vertical sizes. By the eye size rules, 2~16 candidate eye components can be detected. Next, candidate eye pairs are verified by location and shape information, and one eye pair location is decided using face models of the eyes and eyebrows. Once we extract the eye regions, we connect the center points of the two eyes and calculate the angle between them; then we rotate the face to compensate for the angle so that the two eyes lie on a horizontal line. We tested 120 input images from 40 people and achieved a 91.7% success rate using the eye size rules and face model. The main reason for the 8.3% failure is components adjacent to the eyes, such as eyebrows. To detect facial components from the failed images, we are developing a mouth region processing module.

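The skew-correction step in this abstract (connect the two eye centers, compute the angle, rotate by its negative so the eyes lie on a horizontal line) reduces to a small calculation; the (x, y) point convention here is an assumption:

```python
import math

def skew_angle_degrees(left_eye, right_eye):
    # Angle of the line joining the two eye centers, in degrees.
    # Rotating the face image by the negative of this angle levels the eyes.
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.degrees(math.atan2(dy, dx))
```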

Salivary biomarkers in oral squamous cell carcinoma

  • Nguyen, Truc Thi Hoang;Sodnom-Ish, Buyanbileg;Choi, Sung Weon;Jung, Hyo-Il;Cho, Jaewook;Hwang, Inseong;Kim, Soung Min
    • Journal of the Korean Association of Oral and Maxillofacial Surgeons / v.46 no.5 / pp.301-312 / 2020
  • In disease diagnostics and health surveillance, the use of saliva has potential because its collection is convenient and noninvasive. Over the past two decades, the development of salivary testing for the early detection of cancer, especially oral cavity and oropharynx cancer, has gained the interest of researchers and clinicians. Until recently, oral cavity and oropharynx cancers still had a five-year survival rate of 62%, one of the lowest among all major human cancers. More than 90% of oral cancers are oral squamous cell carcinoma (OSCC). Despite the ease of accessing the oral cavity in clinical examination, most OSCC lesions are not diagnosed at an early stage, which is suggested to be the main cause of the low survival rate. Many studies have been performed, reporting more than 100 potential saliva biomarkers for OSCC. However, there are still obstacles in identifying reliable OSCC salivary biomarkers and in the clinical application of an early diagnosis protocol. The current review article discusses these emerging issues and is hoped to raise awareness of this topic among both researchers and clinicians. We also suggest potential salivary biomarkers that are reliable, specific, and sensitive for the early detection of OSCC.

Face Tracking Using Face Feature and Color Information (색상과 얼굴 특징 정보를 이용한 얼굴 추적)

  • Lee, Kyong-Ho
    • Journal of the Korea Society of Computer and Information / v.18 no.11 / pp.167-174 / 2013
  • In this paper, we implement the ability to find and track faces in color images. Face tracking is the task of finding face regions in an image using the functions of a computer system, and this function is necessary for robots. However, face tracking cannot be performed by merely extracting skin color from the image, because the face in an image varies with conditions such as lighting and facial expression. In this paper, we use a skin color pixel extraction function augmented with a lighting compensation function, and we implement the entire processing system, which confirms a region as a face by finding the features of the eyes, nose, and mouth. The lighting compensation function is an adjusted sine function; although its result is not well suited to human vision, the function showed about a 4% improvement. Facial features are detected by amplifying and reducing pixel values and comparing the resulting images. The eye and nose positions and the lips are detected, and the face tracking efficiency was good.
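The "adjusted sine function" mentioned in this abstract is not given explicitly, but a lighting compensation of that general shape can be sketched as mapping intensities through a quarter-period sine, which lifts dark and mid tones; the exact curve used in the paper may well differ:

```python
import numpy as np

def sine_lighting_compensation(gray):
    # Map [0, 255] intensities through sin(pi/2 * x/255), which
    # brightens dark and mid tones while keeping 0 -> 0 and 255 -> 255.
    # This particular curve is an illustrative assumption.
    x = gray.astype(float) / 255.0
    return (255.0 * np.sin(0.5 * np.pi * x)).astype(np.uint8)
```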

Evaluation of different molecular methods for detection of Senecavirus A and the result of the antigen surveillance in Korea during 2018

  • Heo, JinHwa;Lee, Min-Jung;Kim, HyunJoo;Lee, SuKyung;Choi, Jida;Kang, Hae-Eun;Nam, Hyang-Mi;Nah, JinJu
    • Korean Journal of Veterinary Service / v.44 no.1 / pp.15-19 / 2021
  • Senecavirus A (SVA), previously known as Seneca Valley virus, can cause vesicular disease and neonatal losses in pigs that are clinically indistinguishable from foot-and-mouth disease virus (FMDV) infection. After the first case report in Canada in 2007, it had been identified only in North America, including the United States. Since 2015, however, SVA has emerged outside North America in Brazil and in several Asian countries, including China, Thailand, and Vietnam. Considering the SVA occurrence in neighboring countries, there has been a high risk that SVA could be introduced into Korea at any time. Differential diagnosis is particularly important for suspected cases of vesicular disease in countries where FMD is occurring. So far, several molecular detection methods for SVA have been published, but none has yet been validated as the reference method. In this study, seven molecular methods for detecting SVA were evaluated. Among them, the method of Fowler et al. (2017), which targets the 3D gene region and showed the highest sensitivity with no cross-reaction with other vesicular disease agents including FMDV, VSV, and SVDV, was selected and applied to antigen surveillance of SVA. A total of 245 samples from 157 pigs on 61 farms, submitted for animal disease diagnosis nationwide during 2018, all tested negative. In 2018, no sign of SVA occurrence was confirmed in Korea, but surveillance for SVA needs to be continued and accumulated, given the high risk of SVA in neighboring countries.

Signal Detection for Adverse Events of Finasteride Using Korea Adverse Event Reporting System (KAERS) Database (의약품이상사례보고시스템 데이터베이스를 이용한 피나스테리드의 약물유해반응 실마리 정보 탐색)

  • Baek, Ji-Won;Yang, Bo Ram;Choi, Subin;Shin, Kwang-Hee
    • Korean Journal of Clinical Pharmacy / v.31 no.4 / pp.324-331 / 2021
  • To investigate signals of adverse drug reactions of finasteride by using the Korea Adverse Events Reporting System (KAERS) database. This pharmacovigilance was based on the database of the drug-related adverse reactions reported spontaneously to the KAERS from 2013 to 2017. This study was conducted by disproportionality analysis. Data mining analysis was performed to detect signals of finasteride. The signal was defined by three criteria as proportional reporting ratio (PRR), reporting odds ratio (ROR), and information component (IC). The signals of finasteride were compared with those of the other drugs; dutasteride (similar mechanism of action), minoxidil (different mechanism but similar indications for alopecia), silodosin (different mechanism but similar indications for BPH). It was examined whether the detected signals exist in drug labels in Korea. The total number of adverse event-drug pairs was reported 2,665,429 from 2013 to 2017, of which 1,426 were associated with finasteride. The number of investigated signals of finasteride was 42. The signals that did not include in the drug label were 29 signals, including mouth dry, hypotension, dysuria etc. The signal of finasteride was similar to that of dutasteride and silodosin but was different to that of minoxidil. Early detection of signals through pharmacovigilance is important to patient safety. We investigated 29 signals of finasteride that do not exist in drug labels in Korea. Further pharmacoepidemiological studies should be needed to evaluate the signal causality with finasteride.