• Title/Summary/Keyword: Face Detecting


Comparison of two different methods of detecting residual caries

  • Vural, Uzay Koc;Kutuk, Zeynep Bilge;Ergin, Esra;Cakir, Filiz Yalcin;Gurgan, Sevil
    • Restorative Dentistry and Endodontics
    • /
    • v.42 no.1
    • /
    • pp.48-53
    • /
    • 2017
  • Objectives: The aim of this study was to investigate the ability of the fluorescence-aided caries excavation (FACE) device to detect residual caries by comparing it with conventional methods in vivo. Materials and Methods: A total of 301 females and 202 males with carious teeth participated in this study. The cavity preparations were done by grade 4 (Group 1, 154 teeth), grade 5 (Group 2, 176 teeth), and postgraduate (Group 3, 173 teeth) students. After caries excavation using a handpiece and hand instruments, the presence of residual caries was evaluated by 2 investigators who were previously calibrated for visual-tactile assessment with and without magnifying glasses and trained in the use of a FACE device. The tooth number, cavity type, and presence or absence of residual caries were recorded. The data were analyzed using the Chi-square test, Fisher's exact test, or the McNemar test as appropriate. Kappa statistics were used for calibration. In all tests, the level of significance was set at p = 0.05. Results: Almost half of the cavities prepared were Class II (Class I, 20.9%; Class II, 48.9%; Class III, 20.1%; Class IV, 3.4%; Class V, 6.8%). Higher numbers of cavities with residual caries were observed in Groups 1 and 2 than in Group 3 for all examination methods. Significant differences were found between visual inspection with or without magnifying glasses and inspection with a FACE device for all groups (p < 0.001). More residual caries was detected by inspection with a FACE device (46.5%) than by either visual inspection (31.8%) or inspection with a magnifying glass (37.6%). Conclusions: Within the limitations of this study, the FACE device may be an effective method for the detection of residual caries.
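The statistical comparisons described in this abstract (Chi-square on contingency tables, McNemar on paired detection outcomes) can be sketched in a few lines; the counts below are hypothetical illustrations, not the paper's data:

```python
def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table."""
    (a, b), (c, d) = table
    n = a + b + c + d
    rows = [a + b, c + d]
    cols = [a + c, b + d]
    chi2 = 0.0
    for i, obs_row in enumerate(table):
        for j, obs in enumerate(obs_row):
            expected = rows[i] * cols[j] / n
            chi2 += (obs - expected) ** 2 / expected
    return chi2

def mcnemar_statistic(b, c):
    """McNemar chi-square for paired yes/no outcomes.

    b and c are the discordant cell counts: method A detected caries
    where method B did not, and vice versa."""
    return (b - c) ** 2 / (b + c)

# Hypothetical counts, for illustration only
print(chi_square_2x2([[20, 10], [10, 20]]))  # ~6.67
print(mcnemar_statistic(15, 5))              # 5.0
```

In practice one would use a statistics library (e.g., `scipy.stats.chi2_contingency`) and compare the statistic against the chi-square distribution to get the p-values reported in the abstract.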

The gaze cueing effect depending on the orientations of the face and its background (얼굴과 배경의 방향에 따른 시선 단서 효과)

  • Lijeong, Hong;Min-Shik, Kim
    • Korean Journal of Cognitive Science
    • /
    • v.34 no.2
    • /
    • pp.85-110
    • /
    • 2023
  • The gaze cueing effect refers to faster and more accurate detection of a target when the direction of another person's gaze corresponds with the location of the visual target. The gaze cue can be affected by the orientation of the face: the gaze cueing effect is robust when the face is presented upright, but when the face is inverted, the effect has been observed in only some studies (e.g., Tipples, 2005). This study aimed to examine whether gaze can operate as a cue to guide attention with upright faces, and added variables that can affect the gaze cue: the orientation of the face, the orientation of the background, and the time interval between the gaze cue and the target (SOA). These variables were manipulated systematically to explore whether the gaze cueing effect can be observed under the various conditions. The results showed a significant gaze cueing effect even for inverted faces, in contrast with previous studies. These findings were observed consistently both when the background stimulus was absent (Experiment 1) and when it was present (Experiments 2 and 3). However, there was no significant interaction between the orientations of the face and the background. Moreover, at the short SOA (150 ms), we found a significant gaze cueing effect in every combination of face and background orientation, whereas there was no significant gaze cueing effect at the long SOA (1,000 ms). By demonstrating a consistent gaze cueing effect at the short SOA (150 ms) even for inverted faces, the results of this study raise questions about the reliability and repeatability of previous studies that did not report significant gaze cueing effects for inverted faces. Furthermore, our results provide additional evidence that attention can be guided toward the direction of the gaze under various orientations of the face and background.
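The cueing effect reported above is conventionally computed as the mean reaction-time difference between invalid and valid gaze-cue trials; a minimal sketch, with the reaction times below being hypothetical:

```python
def gaze_cueing_effect(rt_valid, rt_invalid):
    """Cueing effect in ms: mean RT on invalid-cue trials minus
    mean RT on valid-cue trials (positive = facilitation by gaze)."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(rt_invalid) - mean(rt_valid)

# Hypothetical per-trial reaction times (ms)
valid = [300.0, 310.0]
invalid = [330.0, 340.0]
print(gaze_cueing_effect(valid, invalid))  # 30.0
```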

Machine Learning-Based Malicious URL Detection Technique (머신러닝 기반 악성 URL 탐지 기법)

  • Han, Chae-rim;Yun, Su-hyun;Han, Myeong-jin;Lee, Il-Gu
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.32 no.3
    • /
    • pp.555-564
    • /
    • 2022
  • Recently, cyberattacks have increasingly used intelligent, advanced malicious code targeting non-face-to-face environments such as telecommuting, telemedicine, and automated industrial facilities, and the resulting damage is growing. Traditional information protection systems, such as anti-virus software, detect known malicious URLs based on signature patterns, so they cannot detect unknown malicious URLs. In addition, conventional static-analysis-based malicious URL detection is vulnerable to dynamic loading and cryptographic attacks. This study proposes a technique for efficiently detecting malicious URLs by dynamically learning malicious URL data. In the proposed technique, malicious codes are classified using machine learning-based feature selection algorithms, and accuracy is improved by removing obfuscation elements after preprocessing using Weighted Euclidean Distance (WED). According to the experimental results, the proposed machine learning-based malicious URL detection technique achieves an accuracy of 89.17%, an improvement of 2.82% over the conventional method.
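The Weighted Euclidean Distance (WED) named above is a per-feature weighted distance between feature vectors; the URL features and weights below are hypothetical illustrations, not the paper's feature set:

```python
import math

def weighted_euclidean(x, y, w):
    """Weighted Euclidean distance between two feature vectors:
    sqrt(sum_i w_i * (x_i - y_i)^2)."""
    return math.sqrt(sum(wi * (xi - yi) ** 2 for xi, yi, wi in zip(x, y, w)))

# Toy URL feature vectors (length, digit count, dot count) -- hypothetical
benign = [20.0, 1.0, 2.0]
suspect = [60.0, 14.0, 6.0]
weights = [0.1, 1.0, 0.5]
print(weighted_euclidean(benign, suspect, weights))
```

In a pipeline like the one described, such a distance could be used during preprocessing to flag URL samples that sit far from the benign cluster before feature selection and classification.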

A Method of Eye and Lip Region Detection using Faster R-CNN in Face Image (초고속 R-CNN을 이용한 얼굴영상에서 눈 및 입술영역 검출방법)

  • Lee, Jeong-Hwan
    • Journal of the Korea Convergence Society
    • /
    • v.9 no.8
    • /
    • pp.1-8
    • /
    • 2018
  • In the field of biometric security, such as face and iris recognition, it is essential to extract facial features such as the eyes and lips. In this paper, we study a method of detecting the eye and lip regions in face images using Faster R-CNN. Faster R-CNN is an object detection method based on deep learning and is well known to outperform conventional feature-based methods. In this paper, feature maps are extracted by applying convolution, rectified linear activation, and max pooling to facial images in order. A region proposal network (RPN) is trained on the feature maps to generate region proposals. Then, eye and lip detectors are trained using the region proposals and the feature maps. To examine the performance of the proposed method, we experimented with 800 face images of Korean men and women, using 480 images for training and 320 for testing. Computer simulation showed that the average precision of eye and lip region detection after 50 epochs was 97.7% and 91.0%, respectively.
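The feature-extraction pipeline described above (convolution, rectification, max pooling applied in order) can be sketched in pure Python; the arrays and kernel here are toy inputs, not the paper's network:

```python
def conv2d(img, kernel):
    """Valid 2D convolution (no padding, stride 1) on a list-of-lists image."""
    kh, kw = len(kernel), len(kernel[0])
    return [[sum(img[i + u][j + v] * kernel[u][v]
                 for u in range(kh) for v in range(kw))
             for j in range(len(img[0]) - kw + 1)]
            for i in range(len(img) - kh + 1)]

def relu(fm):
    """Rectified linear unit applied elementwise to a feature map."""
    return [[max(0.0, v) for v in row] for row in fm]

def maxpool2x2(fm):
    """Non-overlapping 2x2 max pooling."""
    return [[max(fm[i][j], fm[i][j + 1], fm[i + 1][j], fm[i + 1][j + 1])
             for j in range(0, len(fm[0]) - 1, 2)]
            for i in range(0, len(fm) - 1, 2)]

# Toy 4x4 "image" pushed through conv -> ReLU -> max pool
img = [[1.0, -2.0, 3.0, 0.0],
       [0.0, 1.0, -1.0, 2.0],
       [2.0, 0.0, 1.0, 1.0],
       [1.0, 3.0, 0.0, -1.0]]
feature_map = maxpool2x2(relu(conv2d(img, [[1.0]])))
print(feature_map)
```

In Faster R-CNN the analogous feature maps come from a deep backbone and are shared by the RPN and the region-wise detectors; this sketch only illustrates the three elementary operations named in the abstract.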

Developments of real-time monitoring system to measure displacements on face of tunnel in weak rock (위험지반 터널 굴진면의 실시간 변위 감시를 위한 계측시스템 개발)

  • Yun, Hyun-Seok;Song, Gyu-Jin;Kim, Yeong-Bae;Kim, Chang-Yong;Seo, Yong-Seok
    • Journal of Korean Tunnelling and Underground Space Association
    • /
    • v.17 no.4
    • /
    • pp.441-455
    • /
    • 2015
  • In the present study, a face safety monitoring system was developed to judge collapse risk at the tunnel face during construction and thereby secure workers' safety. The system detects abnormal face behavior by analyzing face displacements measured in real time using the x-MR control chart technique. In addition, an algorithm to judge false alarms was developed so that abnormal face behavior and errors occurring during work can be distinguished from each other by comparing the number of measured values exceeding the management criteria with the moving range k. The results of the present study are applicable to real-time monitoring of face behavior in dangerous ground sections to minimize harm to workers.
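The x-MR (individuals and moving range) control chart mentioned above derives control limits from the moving range of consecutive readings, using the standard I-MR chart constants E2 = 2.66 and D4 = 3.267; a minimal sketch with made-up displacement values:

```python
def x_mr_limits(xs):
    """Control limits for an x-MR (individuals / moving range) chart.

    Returns ((LCL_x, UCL_x), UCL_mr). A reading outside the x limits,
    or a moving range above UCL_mr, signals abnormal behavior."""
    mr = [abs(b - a) for a, b in zip(xs, xs[1:])]   # moving ranges
    mr_bar = sum(mr) / len(mr)
    x_bar = sum(xs) / len(xs)
    # Standard individuals-chart constants: E2 = 2.66, D4 = 3.267
    ucl_x = x_bar + 2.66 * mr_bar
    lcl_x = x_bar - 2.66 * mr_bar
    ucl_mr = 3.267 * mr_bar
    return (lcl_x, ucl_x), ucl_mr

# Hypothetical face displacement readings (mm)
(lcl, ucl), ucl_mr = x_mr_limits([1.0, 2.0, 3.0])
print(lcl, ucl, ucl_mr)
```

A monitoring loop would recompute these limits over a sliding window and raise an alarm when new displacement readings exceed them, subject to the false-alarm logic described in the abstract.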

A Tracking Algorithm to Certain People Using Recognition of Face and Cloth Color and Motion Analysis with Moving Energy in CCTV (폐쇄회로 카메라에서 운동에너지를 이용한 모션인식과 의상색상 및 얼굴인식을 통한 특정인 추적 알고리즘)

  • Lee, In-Jung
    • The KIPS Transactions:PartB
    • /
    • v.15B no.3
    • /
    • pp.197-204
    • /
    • 2008
  • It is well known that tracking a specific person is a much-needed technique in humanoid robotics. In robotics, three aspects should be considered: cloth color matching, face recognition, and motion analysis. Because a robot uses sensors, tracking a specific person through CCTV images is quite different from robot-based tracking. The system must be fast on CCTV images, so the computational load must be kept small. We use statistical variables for color matching and adopt eigenfaces for face recognition to speed up the system. Motion analysis must also be added for efficient detection. However, in many motion analysis systems, the speed and recognition rate are low because the system operates on the entire image area. In this paper, we compute moving energy only on the face area found during face recognition, since moving energy requires little computation. When the proposed algorithm was compared experimentally with the method of Girondel et al., it achieved the same recognition rate but ran faster. When LDA was used, the speed was the same and the recognition rate was better than that of Girondel et al.'s method; consequently, the proposed algorithm is more efficient for tracking a specific person.
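The moving-energy measure described above, restricted to the detected face area, amounts to accumulating pixel differences between consecutive frames inside a bounding box; a minimal sketch using squared differences (one common choice, assumed here rather than taken from the paper):

```python
def motion_energy(prev, curr, box):
    """Sum of squared pixel differences between two grayscale frames,
    restricted to the bounding box (x0, y0, x1, y1) of the detected face.

    Restricting the sum to the face region is what keeps the
    computation cheap compared with whole-frame motion analysis."""
    x0, y0, x1, y1 = box
    return sum((curr[y][x] - prev[y][x]) ** 2
               for y in range(y0, y1) for x in range(x0, x1))

# Toy 2x2 frames: every pixel changes by 1, so energy over the box is 4
prev = [[0.0, 0.0], [0.0, 0.0]]
curr = [[1.0, 1.0], [1.0, 1.0]]
print(motion_energy(prev, curr, (0, 0, 2, 2)))  # 4.0
```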

Face Identification Using a Near-Infrared Camera in a Nonrestrictive In-Vehicle Environment (적외선 카메라를 이용한 비제약적 환경에서의 얼굴 인증)

  • Ki, Min Song;Choi, Yeong Woo
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.10 no.3
    • /
    • pp.99-108
    • /
    • 2021
  • The driver's face inside a vehicle is subject to unconstrained conditions, such as changes in lighting, partial occlusion, and various changes in the driver's state. In this paper, we propose a face identification system for an unconstrained in-vehicle environment. The proposed method uses a near-infrared (NIR) camera to minimize the changes in facial images caused by illumination changes inside and outside the vehicle. To handle faces exposed to extreme light, normal face images are converted into simulated overexposed images using their mean and variance for training. Thus, facial classifiers are generated for both normal and extreme illumination conditions. Our method identifies a face by detecting facial landmarks and aggregating the confidence score of each landmark for the final decision. In particular, the performance improvement is largest in the classes where the driver wears glasses or sunglasses, owing to the robustness to partial occlusion gained by recognizing each landmark: the driver can be recognized using the scores of the remaining visible landmarks. We also propose a novel robust rejection scheme and a new evaluation method that considers the relations between registered and unregistered drivers. Experimental results on our dataset and on the PolyU and ORL datasets demonstrate the effectiveness of the proposed method.
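The simulated-overexposure training trick described above shifts a normal image's pixel statistics toward an overexposed distribution using its mean and variance; a rough sketch of one plausible formulation (the target mean and standard deviation below are arbitrary choices, not the paper's values):

```python
def simulate_overexposure(img, target_mean=200.0, target_std=20.0):
    """Standardize a grayscale image by its own mean/std, then rescale
    toward a bright, low-contrast (overexposed-looking) distribution,
    clamping to the 8-bit range [0, 255]."""
    n = sum(len(row) for row in img)
    mean = sum(v for row in img for v in row) / n
    var = sum((v - mean) ** 2 for row in img for v in row) / n
    std = var ** 0.5 or 1.0  # guard against flat images
    return [[min(255.0, max(0.0, (v - mean) / std * target_std + target_mean))
             for v in row] for row in img]

# A mid-gray toy image is pushed up to the bright target distribution
print(simulate_overexposure([[100.0, 100.0], [100.0, 100.0]]))
```

Training classifiers on both the original and the transformed images, as the abstract describes, gives the system examples of both normal and extreme illumination without collecting extra data.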

Extraction of Tongue Region using Graph and Geometric Information (그래프 및 기하 정보를 이용한 설진 영역 추출)

  • Kim, Keun-Ho;Lee, Jeon;Choi, Eun-Ji;Ryu, Hyun-Hee;Kim, Jong-Yeol
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.56 no.11
    • /
    • pp.2051-2057
    • /
    • 2007
  • In Oriental medicine, the status of the tongue is an important indicator of health, reflecting physiological and clinicopathological changes in the inner parts of the body. Tongue diagnosis is not only convenient but also non-invasive, and it is widely used in Oriental medicine. However, tongue diagnosis is strongly affected by examination circumstances, such as the light source, the patient's posture, and the doctor's condition. To develop an automatic tongue diagnosis system for objective and standardized diagnosis, segmenting the tongue is essential but difficult, since the colors of the tongue, lips, and skin in the mouth are similar. The proposed method includes preprocessing, graph-based over-segmentation, detection of positions with a local minimum over shading, edge detection from color differences, and estimation of edge geometry from the probable structure of a tongue, where the preprocessing performs down-sampling to reduce computation time, histogram equalization, and edge enhancement. A tongue was segmented by the proposed method from face images captured with a digital tongue diagnosis system. According to the evaluation of three Oriental medical doctors, the segmented region included effective information and excluded non-tongue regions. The method can therefore support objective and standardized diagnosis.
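The histogram equalization step named in the preprocessing above can be sketched as a CDF remapping of intensity values; a minimal pure-Python version for 8-bit grayscale images:

```python
def equalize_hist(img, levels=256):
    """Histogram equalization for an 8-bit grayscale image
    (list of lists of ints in [0, levels-1]): build the intensity
    histogram, accumulate it into a CDF, and remap each pixel so the
    output intensities are spread over the full range."""
    flat = [v for row in img for v in row]
    n = len(flat)
    hist = [0] * levels
    for v in flat:
        hist[v] += 1
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    # Flat images (n == cdf_min) are returned unchanged
    return [[round((cdf[v] - cdf_min) / (n - cdf_min) * (levels - 1))
             if n > cdf_min else v
             for v in row] for row in img]

# Extreme toy image: the two intensities map to the range endpoints
print(equalize_hist([[0, 255]]))  # [[0, 255]]
```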

Metal Object Detection System For Drive Inside Protection (내부 운전자 보호를 위한 금속 물체 탐지 시스템)

  • Kim, Jin-Kyu;Joo, Young-Hoon
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.19 no.5
    • /
    • pp.609-614
    • /
    • 2009
  • The purpose of this paper is to design a metal object detection system to protect the driver inside the vehicle. To do this, we propose an algorithm for designing a color filter that can detect metal objects using fuzzy theory, and an algorithm for detecting the driver's face area using a fuzzy skin color filter. Using these, we propose an algorithm for detecting candidate regions of metallic objects, to which the metallic object color filter is then applied. Finally, we show the effectiveness and feasibility of the proposed method through experiments.
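A fuzzy skin color filter of the kind described above typically combines per-channel fuzzy membership functions; the triangular memberships and RGB ranges below are hypothetical illustrations, not the paper's actual rules:

```python
def triangular(v, lo, peak, hi):
    """Triangular fuzzy membership function: 0 outside (lo, hi),
    rising linearly to 1 at the peak and falling back to 0."""
    if v <= lo or v >= hi:
        return 0.0
    if v <= peak:
        return (v - lo) / (peak - lo)
    return (hi - v) / (hi - peak)

def skin_membership(r, g, b):
    """Fuzzy AND (minimum) of per-channel memberships; the rule
    'skin tones have high R, moderate G, lower B' and all ranges
    here are assumptions for illustration."""
    return min(triangular(r, 120, 200, 255),
               triangular(g, 60, 140, 220),
               triangular(b, 40, 110, 200))

# A pixel near the assumed skin-tone peaks gets full membership
print(skin_membership(200, 140, 110))  # 1.0
```

Thresholding the membership map (e.g., membership > 0.5) would yield a binary skin mask from which the face area can be located; a similarly shaped filter with different channel ranges would play the role of the metallic-object color filter.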

Design and Implementation of Fire Detection System Using New Model Mixing

  • Gao, Gao;Lee, SangHyun
    • International Journal of Advanced Culture Technology
    • /
    • v.9 no.4
    • /
    • pp.260-267
    • /
    • 2021
  • In this paper, we use a new mixed model combining YoloV5 and DeepSort. For fire detection, we increase accuracy by automatically extracting the characteristics of flames in images from the training data. In addition, the high false alarm rate, a common problem in fire detection, is addressed with this new mixed model. To confirm the results of this paper, we tested indoors and outdoors. In the indoor tests, YoloV5 reached an accuracy of 75% at frame 253 and 77% at frame 527, and the YoloV5+DeepSort model showed the same accuracies (75% at frame 253 and 77% at frame 527). However, the smoke and fire detection errors that appeared with YoloV5 alone disappeared. In the outdoor tests, the YoloV5 model detected fire with 75% accuracy but mistakenly detected a human face as smoke. After applying the YoloV5+DeepSort model, accuracy remained at 75%, as with YoloV5 alone, but the false positive disappeared.