• Title/Summary/Keyword: Face Detection


Clinical and Laboratory Features to Consider Genetic Evaluation among Children and Adolescents with Short Stature

  • Seokjin Kang
    • Journal of Interdisciplinary Genomics
    • /
    • v.5 no.2
    • /
    • pp.18-23
    • /
    • 2023
  • Conventional evaluation methods for identifying the organic causes of short stature have a low detection rate. If an infant who is small for gestational age manifests postnatal growth deterioration, a triangular face, relative macrocephaly, and a protruding forehead, genetic testing of IGF2, H19, GRB10, MEST, CDKN1, CUL7, OBSL1, and CCDC9 should be considered to determine the presence of Silver-Russell syndrome and 3-M syndrome. If a short patient with prenatal growth failure also exhibits postnatal growth failure, microcephaly, low IGF-1 levels, sensorineural deafness, or impaired intellectual development, genetic testing of IGF1 and IGFALS should be conducted. Furthermore, genetic testing of GH1, GHRHR, HESX1, SOX3, PROP1, POU1F1, and LHX3 should be considered if patients with isolated growth hormone deficiency have a height below -3 standard deviation score (SDS), barely detectable serum growth hormone concentrations, and deficiencies of other anterior pituitary hormones. In short patients with height SDS <-3 and high growth hormone levels, genetic testing should be considered to identify GHR mutations. Lastly, when severely short patients (height z score <-3) exhibit high prolactin levels and recurrent pulmonary infections, genetic testing should be conducted to identify STAT5B mutations.
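The decision rules above amount to a phenotype-to-gene-panel lookup. A minimal sketch of that mapping follows; the phenotype labels are my own shorthand, while the gene lists are taken from the abstract:

```python
# Hypothetical lookup table distilled from the abstract's decision rules.
# Keys are shorthand phenotype descriptions (my own wording);
# values are the gene panels named in the text.
PANELS = {
    "SGA + postnatal growth failure + triangular face + relative macrocephaly":
        ["IGF2", "H19", "GRB10", "MEST", "CDKN1", "CUL7", "OBSL1", "CCDC9"],
    "pre/postnatal growth failure + microcephaly + low IGF-1 + deafness":
        ["IGF1", "IGFALS"],
    "isolated GHD, height SDS < -3, barely detectable GH":
        ["GH1", "GHRHR", "HESX1", "SOX3", "PROP1", "POU1F1", "LHX3"],
    "height SDS < -3 with high GH levels":
        ["GHR"],
    "height SDS < -3, high prolactin, recurrent pulmonary infection":
        ["STAT5B"],
}

def genes_to_test(phenotype: str) -> list[str]:
    """Return the gene panel suggested for a given phenotype key."""
    return PANELS.get(phenotype, [])
```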

A Study on a Method for Detecting Leak Holes in Respirators Using IoT Sensors

  • Woochang Shin
    • International journal of advanced smart convergence
    • /
    • v.12 no.4
    • /
    • pp.378-385
    • /
    • 2023
  • The importance of wearing respiratory protective equipment has been highlighted even more during the COVID-19 pandemic. Even if the suitability of respiratory protection has been confirmed through laboratory testing, leakage points can still arise in real working conditions due to improper application by the wearer, damage to the equipment, or sudden movements. In this paper, we propose a method to detect leak holes by attaching an IoT sensor to a full-face respirator and measuring the pressure changes inside the mask caused by the wearer's breathing. We designed nine experimental scenarios by varying the size of the respirator's leak holes and the breathing cycle time, and collected respiratory data from the wearer accordingly. We then analyzed the data to identify the duration and pressure-change range of each breath, and used these features to train a neural network model for detecting leak holes in the respirator. The trained model achieved a sensitivity of 100%, a specificity of 94.29%, and an accuracy of 97.53%. We conclude that effective leak-hole detection can be achieved by incorporating affordable, small-sized IoT sensors into respiratory protective equipment.
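The reported figures follow the standard binary-classification definitions. A quick sketch that reproduces them from one confusion matrix consistent with the abstract (the actual per-class counts are not stated there, so the tp/fn/tn/fp values below are reconstructed for illustration, not taken from the paper):

```python
def metrics(tp, fn, tn, fp):
    """Standard binary-classification metrics for a leak / no-leak test."""
    sensitivity = tp / (tp + fn)                 # true positive rate
    specificity = tn / (tn + fp)                 # true negative rate
    accuracy = (tp + tn) / (tp + fn + tn + fp)   # overall correct fraction
    return sensitivity, specificity, accuracy

# One confusion matrix consistent with the reported figures (reconstructed):
sens, spec, acc = metrics(tp=46, fn=0, tn=33, fp=2)
print(f"sensitivity={sens:.2%} specificity={spec:.2%} accuracy={acc:.2%}")
# → sensitivity=100.00% specificity=94.29% accuracy=97.53%
```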

Optimization of Memristor Devices for Reservoir Computing (축적 컴퓨팅을 위한 멤리스터 소자의 최적화)

  • Kyeongwoo Park;HyeonJin Sim;HoBin Oh;Jonghwan Lee
    • Journal of the Semiconductor & Display Technology
    • /
    • v.23 no.1
    • /
    • pp.1-6
    • /
    • 2024
  • Recently, artificial neural networks have been playing a crucial role across various fields. They are typically categorized into feedforward neural networks and recurrent neural networks. Feedforward neural networks are used primarily for processing static spatial patterns, such as image recognition and object detection, and are not suitable for handling temporal signals. Recurrent neural networks, on the other hand, face the challenges of complex training procedures and large computational requirements. In this paper, we propose memristors suitable for reservoir computing systems, an advanced form of recurrent neural network that utilizes a mask processor. Using the characteristic equations of Ti/TiOx/TaOy/Pt, Pt/TiOx/Pt, and Ag/ZnO-NW/Pt memristors, we generated current-voltage curves and verified their memristive behavior by confirming hysteresis. We then trained and tested reservoir computing systems based on these memristors using the NIST TI-46 database. Among them, the system based on Ti/TiOx/TaOy/Pt memristors reached 99% accuracy, confirming that the Ti/TiOx/TaOy/Pt memristor structure is suitable for speech recognition inference.
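The device-specific characteristic equations used in the paper are not given in the abstract. As an illustration of how a pinched current-voltage hysteresis loop arises, here is a generic linear ion-drift memristor model (HP-style), with all parameter values assumed, not taken from the paper's devices:

```python
import math

def simulate_memristor(amplitude=1.0, freq=1.0, r_on=100.0, r_off=16000.0,
                       mobility=1e-14, depth=1e-8, steps=2000):
    """Generic linear ion-drift memristor driven by one sine-wave period.
    State x in [0, 1] is the normalized width of the doped region;
    all parameter values are illustrative assumptions."""
    dt = 1.0 / (freq * steps)            # one full period over `steps` samples
    x = 0.1                              # assumed initial state
    voltages, currents = [], []
    for n in range(steps):
        v = amplitude * math.sin(2 * math.pi * freq * n * dt)
        r = r_on * x + r_off * (1 - x)   # total memristance
        i = v / r
        # linear ion-drift state update, clamped to [0, 1]
        x = min(max(x + mobility * r_on / depth**2 * i * dt, 0.0), 1.0)
        voltages.append(v)
        currents.append(i)
    return voltages, currents
```

At the same applied voltage the current differs between the rising and falling quarter of the sweep, which is the pinched-loop hysteresis the paper checks for.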


The Nevus Lipomatosus Superficialis of Face: A Case Report and Literature Review

  • Jae-Won Yang;Mi-Ok Park
    • Archives of Plastic Surgery
    • /
    • v.51 no.2
    • /
    • pp.196-201
    • /
    • 2024
  • Nevus lipomatosus superficialis (NLS) is a hamartoma of adipose tissue that has rarely been reported over the past 100 years. We treated one case and conducted a systematic review of the literature. A 41-year-old man presented with a cutaneous multinodular lesion in the posterior region near the right auricle. The lesion was excised and examined histopathologically. For the literature review, we searched PubMed with the keyword "NLS," limiting the search to articles written in English with available full text. We analyzed the following data: year of report, nation of corresponding author, sex of patient, age at onset, duration of disease, location of lesion, type of lesion, associated symptoms, pathological findings, and treatment. Of 158 relevant articles in PubMed, 112 fulfilled our inclusion criteria, referring to a total of 149 cases (cases with insufficient clinical information were excluded). In rare cases, the diagnosis of NLS was confirmed when the lesion coexisted with sebaceous trichofolliculoma and Demodex infestation. Clinical awareness of NLS has increased recently. NLS is an indolent, asymptomatic benign neoplasm that may nevertheless exhibit malignant behavior in terms of large lesion size and specific anatomical locations. Early detection and curative treatment should be promoted.

Enhancing Data Protection in Digital Communication: A Novel Method of Combining Steganography and Encryption

  • Khaled H. Abuhmaidan;Marwan A. Al-Share;Abdallah M. Abualkishik;Ahmad Kayed
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.18 no.6
    • /
    • pp.1619-1637
    • /
    • 2024
  • In today's highly digitized landscape, securing digital communication is paramount due to threats like hacking, unauthorized data access, and network policy violations. The response to these challenges has been the development of cryptography applications, though many existing techniques face issues of complexity, efficiency, and limitations. Notably, sophisticated intruders can easily discern encrypted data during transmission, casting doubt on overall security. In contrast to encryption, steganography offers the unique advantage of concealing data without easy detection, although it, too, grapples with challenges. The primary hurdles in image steganography revolve around the quality and payload capacity of the cover image, which are persistently compromised. This article introduces a pioneering approach that integrates image steganography and encryption, presenting the BitPatternStego method. This novel technique addresses prevalent issues in image steganography, such as stego-image quality and payload, by concealing secret data within image pixels with identical bit patterns as their characters. Consequently, concerns regarding the quality and payload capacity of steganographic images become obsolete. Moreover, the BitPatternStego method boasts the capability to generate millions of keys for the same secret message, offering a robust and versatile solution to the evolving landscape of digital security challenges.
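The abstract describes concealing secret data "within image pixels with identical bit patterns as their characters." One plausible reading of that idea, sketched below as my own illustration rather than the paper's actual BitPatternStego algorithm, is to leave the cover image untouched and record, for each message byte, the index of a pixel that already holds that bit pattern; the index list then acts as the key, and since each byte usually has many matching pixels, many distinct keys encode the same message:

```python
import random

def embed(cover_pixels, message, rng=None):
    """Sketch of bit-pattern matching steganography (illustrative only).
    Instead of altering pixels, find a pixel whose byte already equals
    each message byte and record its index. The cover image is never
    modified, so stego-image quality is not degraded at all."""
    rng = rng or random.Random()
    index = {}  # byte value -> list of pixel positions holding that value
    for pos, p in enumerate(cover_pixels):
        index.setdefault(p, []).append(pos)
    key = []
    for byte in message.encode("utf-8"):
        candidates = index.get(byte)
        if not candidates:
            raise ValueError(f"no pixel matches byte {byte}")
        key.append(rng.choice(candidates))  # many valid choices -> many keys
    return key

def extract(cover_pixels, key):
    """Recover the message by reading the pixels named in the key."""
    return bytes(cover_pixels[i] for i in key).decode("utf-8")
```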

A study on the enhancement of emotion recognition through facial expression detection in user's tendency (사용자의 성향 기반의 얼굴 표정을 통한 감정 인식률 향상을 위한 연구)

  • Lee, Jong-Sik;Shin, Dong-Hee
    • Science of Emotion and Sensibility
    • /
    • v.17 no.1
    • /
    • pp.53-62
    • /
    • 2014
  • Despite the huge potential of practical applications of emotion recognition technologies, their enhancement remains a challenge, mainly due to the difficulty of recognizing emotion. Although imperfectly, human emotions can be recognized through images and sounds. Emotion recognition has been researched extensively in image-based studies, sound-based studies, and studies combining both. Emotion recognition through facial expression detection is especially effective, as emotions are primarily expressed in the human face. However, differences in user environments and in users' familiarity with the technologies may cause significant disparities and errors. To enhance the accuracy of real-time emotion recognition, it is crucial to understand and analyze users' personality traits, which contribute to improved emotion recognition. This study focuses on analyzing users' personality traits and applying them in the emotion recognition system to reduce errors in emotion recognition through facial expression detection and to improve the accuracy of the results. In particular, the study offers a practical solution for users with subtle facial expressions or a low degree of emotional expression by providing an enhanced emotion recognition function.

Vision-based Low-cost Walking Spatial Recognition Algorithm for the Safety of Blind People (시각장애인 안전을 위한 영상 기반 저비용 보행 공간 인지 알고리즘)

  • Sunghyun Kang;Sehun Lee;Junho Ahn
    • Journal of Internet Computing and Services
    • /
    • v.24 no.6
    • /
    • pp.81-89
    • /
    • 2023
  • In modern society, blind people face difficulties in navigating common environments such as sidewalks, elevators, and crosswalks. Research has been conducted to alleviate these inconveniences for the visually impaired through the use of visual and audio aids. However, such research often encounters limitations when it comes to practical implementation due to the high cost of wearable devices, high-performance CCTV systems, and voice sensors. In this paper, we propose an artificial intelligence fusion algorithm that utilizes low-cost video sensors integrated into smartphones to help blind people safely navigate their surroundings during walking. The proposed algorithm combines motion capture and object detection algorithms to detect moving people and various obstacles encountered during walking. We employed the MediaPipe library for motion capture to model and detect surrounding pedestrians during motion. Additionally, we used object detection algorithms to model and detect various obstacles that can occur during walking on sidewalks. Through experimentation, we validated the performance of the artificial intelligence fusion algorithm, achieving accuracy of 0.92, precision of 0.91, recall of 0.99, and an F1 score of 0.95. This research can assist blind people in navigating through obstacles such as bollards, shared scooters, and vehicles encountered during walking, thereby enhancing their mobility and safety.
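The reported F1 score is consistent with the reported precision and recall, since F1 is their harmonic mean; a quick check:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Reported values: precision 0.91, recall 0.99
print(round(f1_score(0.91, 0.99), 2))  # → 0.95
```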

Speech Activity Decision with Lip Movement Image Signals (입술움직임 영상신호를 고려한 음성존재 검출)

  • Park, Jun;Lee, Young-Jik;Kim, Eung-Kyeu;Lee, Soo-Jong
    • The Journal of the Acoustical Society of Korea
    • /
    • v.26 no.1
    • /
    • pp.25-31
    • /
    • 2007
  • This paper describes an attempt to prevent external acoustic noise from being misrecognized as speech recognition input. To this end, the speech activity detection stage of the recognizer considers not only the acoustic energy but also the lip-movement image signal of the speaker. First, successive images are captured with a PC camera and examined to determine whether the lips are moving; the lip-movement data are stored in shared memory that the recognition process can access. Then, in the speech activity detection stage, which is the preprocessing phase of speech recognition, the system checks the data in the shared memory to verify whether the detected acoustic energy originates from the speaker's speech. The speech recognition processor and the image processor were connected and tested successfully: when the speaker faced the camera and spoke, the recognition result was output normally, whereas when the speaker spoke without facing the camera, no recognition result was produced. In other words, if no lip-movement image is identified, the input acoustic energy is regarded as acoustic noise even though energy is present.
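The decision rule described above can be sketched as a simple conjunction of the two cues; the energy threshold value below is an assumption, as the abstract gives no numbers:

```python
def is_speech(acoustic_energy, lip_moving, energy_threshold=0.5):
    """Sketch of the paper's fusion rule: acoustic energy alone is not
    enough; the lip-movement flag from the image process (shared via
    shared memory in the paper) must confirm that the energy comes
    from the speaker. The threshold value is an assumption."""
    return acoustic_energy > energy_threshold and lip_moving
```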

Robust Eye Localization using Multi-Scale Gabor Feature Vectors (다중 해상도 가버 특징 벡터를 이용한 강인한 눈 검출)

  • Kim, Sang-Hoon;Jung, Sou-Hwan;Cho, Seong-Won;Chung, Sun-Tae
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.45 no.1
    • /
    • pp.25-36
    • /
    • 2008
  • Eye localization means locating the centers of the pupils, and is necessary for face recognition and related applications. Most eye localization methods reported so far still need improvement in robustness as well as precision for successful application. In this paper, we propose a robust eye localization method using multi-scale Gabor feature vectors without a large computational burden. Eye localization using Gabor feature vectors is already employed in methods such as EBGM, but the method used in EBGM is known not to be robust with respect to initial values, illumination, and pose, and may need an extensive search range to achieve the required performance, which can incur a large computational burden. The proposed method takes a multi-scale approach. It first localizes the eyes in a low-resolution face image using the Gabor jet similarity between the Gabor feature vector at estimated initial eye coordinates and the Gabor feature vectors in the eye model of the corresponding scale. It then localizes the eyes in the next-scale face image in the same way, but with initial eye points estimated from the eye coordinates found at the lower resolution. Repeating this process recursively, the proposed method finally localizes the eyes in the original-resolution face image. The proposed method also applies an effective illumination normalization in the preprocessing stage, making the multi-scale approach more robust to illumination and enhancing the eye detection success rate. Experimental results verify that the proposed eye localization method improves the precision rate without a large computational overhead compared with previously reported methods, and is robust to variations of pose and illumination.
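In EBGM-style matching, the Gabor jet similarity mentioned above is typically the normalized dot product of the Gabor coefficient magnitudes; a minimal sketch (the paper may use a different variant, e.g. one that also includes phase):

```python
import math

def jet_similarity(jet_a, jet_b):
    """Magnitude-based Gabor jet similarity: normalized dot product of
    the Gabor coefficient magnitudes at two image points. Returns a
    value near 1.0 when the local texture patterns match."""
    dot = sum(a * b for a, b in zip(jet_a, jet_b))
    norm_a = math.sqrt(sum(a * a for a in jet_a))
    norm_b = math.sqrt(sum(b * b for b in jet_b))
    return dot / (norm_a * norm_b)
```

The coarse-to-fine search then maximizes this similarity against the eye-model jets at each pyramid level, seeding each level with the coordinates found at the level below.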

Annotation Method based on Face Area for Efficient Interactive Video Authoring (효과적인 인터랙티브 비디오 저작을 위한 얼굴영역 기반의 어노테이션 방법)

  • Yoon, Ui Nyoung;Ga, Myeong Hyeon;Jo, Geun-Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.1
    • /
    • pp.83-98
    • /
    • 2015
  • Many TV viewers mainly use portal sites to retrieve information related to a broadcast while watching TV. However, finding the desired information takes a long time, because the internet presents too much information that is not required; consequently, this process cannot satisfy users who want to consume information immediately. Interactive video is being actively investigated to solve this problem. An interactive video provides clickable objects, areas, or hotspots for interacting with users: when users click an object on the interactive video, they can instantly see additional information related to the video. Making an interactive video with an authoring tool involves three basic steps: (1) create an augmented object; (2) set the object's area and the time it is displayed on the video; (3) set an interactive action linked to pages or hyperlinks. Users of existing authoring tools such as Popcorn Maker and Zentrick spend a lot of time on step (2). With wireWAX, they can save the time needed to set an object's location and display time, because wireWAX uses a vision-based annotation method, but they must wait for the tool to detect and track the object. It is therefore desirable to reduce the time spent on step (2) by combining the benefits of manual and vision-based annotation effectively. This paper proposes a novel annotation method that allows an annotator to annotate easily based on face areas. The method consists of a pre-processing step and an annotation step. Pre-processing is needed so that the system can detect shots for users who want to find video content easily.
The pre-processing step is as follows: 1) extract shots from the video frames using color-histogram-based shot boundary detection; 2) cluster shots by similarity and align them as shot sequences; and 3) detect and track faces in all shots of each shot sequence and save the results into the shot sequence metadata. After pre-processing, the user annotates objects as follows: 1) the annotator selects a shot sequence and then a keyframe of a shot in that sequence; 2) the annotator annotates objects at positions relative to the actor's face on the selected keyframe, and the same objects are then annotated automatically until the end of the shot sequence wherever a face area has been detected; and 3) the user assigns additional information to the annotated objects. In addition, this paper designs a feedback model to compensate for defects that may occur after object annotation, such as wrongly aligned shots, wrongly detected faces, and inaccurate object locations. Users can also apply an interpolation method to restore the positions of objects deleted during feedback, and then save the annotated object data as interactive object metadata. Finally, this paper presents an interactive video authoring system implemented to verify the performance of the proposed annotation method. The experiments analyze object annotation time and report a user evaluation. On average, object annotation with the proposed tool was about twice as fast as with existing authoring tools; annotation occasionally took longer when wrong shots were detected during pre-processing. The usefulness and convenience of the system were measured through a user evaluation aimed at users experienced with interactive video authoring systems: 19 recruited experts answered 11 questions drawn from the CSUQ (Computer System Usability Questionnaire), which was designed by IBM for evaluating systems. The evaluation showed that the proposed tool was rated about 10% more useful for authoring interactive video than the other interactive video authoring systems.
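The color-histogram-based shot boundary detection in pre-processing step 1 can be sketched with a histogram-intersection test; the bin count and threshold below are assumptions, as the abstract does not state them:

```python
def histogram(frame, bins=16):
    """Normalized intensity histogram of a frame (pixels are ints 0-255)."""
    h = [0] * bins
    for p in frame:
        h[p * bins // 256] += 1
    total = sum(h)
    return [count / total for count in h]

def is_shot_boundary(frame_a, frame_b, threshold=0.5):
    """Declare a shot boundary when the histograms of consecutive frames
    overlap less than the threshold (threshold value is assumed)."""
    ha, hb = histogram(frame_a), histogram(frame_b)
    overlap = sum(min(a, b) for a, b in zip(ha, hb))  # histogram intersection
    return overlap < threshold
```

A production implementation would use per-channel color histograms rather than grayscale intensities, but the boundary test is the same.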