• Title/Summary/Keyword: face pose

Search Results: 189

Study of Posture Evaluation Method in Chest PA Examination based on Artificial Intelligence (인공지능 기반 흉부 후전방향 검사에서 자세 평가 방법에 관한 연구)

  • Ho Seong Hwang;Yong Seok Choi;Dae Won Lee;Dong Hyun Kim;Ho Chul Kim
    • Journal of Biomedical Engineering Research / v.44 no.3 / pp.167-175 / 2023
  • Chest PA is the basic examination in radiographic imaging, and demand for it is constantly increasing with the rise in respiratory diseases. However, this demand is not being met due to problems such as a shortage of radiological technologists, sexual shame caused by patient contact, and the spread of infectious diseases. Artificial intelligence has been applied in many cases to solve this problem. Therefore, the purpose of this research is to build an artificial intelligence dataset for Chest PA and to find a posture evaluation method. To construct the posture dataset, posture images were acquired during actual and simulated examinations and classified into correct and incorrect patient postures. To evaluate the posture evaluation method, a pose estimation algorithm was used to preprocess the dataset and an artificial intelligence classification algorithm was applied. As a result, the Chest PA posture dataset was validated with over 95% accuracy across all classification algorithms, and accuracy was further improved by combining the top-down pose estimation algorithm AlphaPose with the InceptionV3 classifier. Based on this, it will be possible to build a non-face-to-face automatic Chest PA examination system using artificial intelligence.
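The two-stage pipeline this abstract describes (pose estimation, then posture classification) can be sketched as follows. AlphaPose and InceptionV3 are stood in for by toy components here; the keypoint names, coordinates, and the shoulder-tilt rule are illustrative assumptions, not details from the paper.

```python
# Sketch: (1) a pose estimator yields 2D keypoints, (2) a classifier labels
# the posture correct/incorrect. All names and thresholds are assumptions.

def normalize(kpts):
    """Translate keypoints so the mid-shoulder is the origin and scale by
    shoulder width, making features invariant to position and image size."""
    lx, ly = kpts["l_shoulder"]
    rx, ry = kpts["r_shoulder"]
    cx, cy = (lx + rx) / 2, (ly + ry) / 2
    scale = max(abs(lx - rx), 1e-6)
    return {k: ((x - cx) / scale, (y - cy) / scale) for k, (x, y) in kpts.items()}

def shoulder_tilt(kpts):
    """Vertical asymmetry of the shoulders after normalization; a correctly
    positioned Chest PA patient should have nearly level shoulders."""
    n = normalize(kpts)
    return abs(n["l_shoulder"][1] - n["r_shoulder"][1])

def classify_posture(kpts, tilt_threshold=0.08):
    """'correct' if shoulder tilt is below an assumed threshold."""
    return "correct" if shoulder_tilt(kpts) < tilt_threshold else "incorrect"

level = {"l_shoulder": (100, 200), "r_shoulder": (300, 202), "neck": (200, 150)}
tilted = {"l_shoulder": (100, 200), "r_shoulder": (300, 260), "neck": (200, 150)}
```

In the paper's actual system a learned image classifier replaces this hand-written rule; the sketch only shows where the pose-estimation preprocessing fits.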

A Study on Correction and Prevention System of Real-time Forward Head Posture (실시간 거북목 증후군 자세 교정 및 예방 시스템 연구)

  • Woo-Seok Choi;Ji-Mi Choi;Hyun-Min Cho;Jeong-Min Park;Kwang-in Kwak
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.24 no.3 / pp.147-156 / 2024
  • This paper introduces the design of a forward head posture (turtle neck syndrome) correction and prevention system for long-term users of digital devices. The number of forward head posture patients in Korea increased by 13% from 2018 to 2021 and, according to the latest statistics, has not yet improved. Because of the nature of the condition, prevention is more important than treatment. Therefore, we designed the system around the camera built into most laptops to increase its accessibility, and we use the Pose Estimation, Face Landmarks Detection, Iris Tracking, and Depth Estimation features of Google MediaPipe, which removes the need to train custom artificial intelligence models and allows users to easily prevent forward head posture.
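A forward-head check of the kind such a system performs on MediaPipe landmarks can be sketched with plain geometry. The landmark pair, the image-coordinate convention, and the 15-degree cutoff are illustrative assumptions, not values from the paper.

```python
import math

# Sketch: flag forward head posture from the angle between the
# shoulder-to-ear vector and the vertical axis. Threshold is an assumption.

def neck_angle_deg(ear, shoulder):
    """Angle (degrees) of the shoulder-to-ear vector off vertical, with image
    y growing downward; larger means the head sits further forward."""
    dx = ear[0] - shoulder[0]
    dy = shoulder[1] - ear[1]  # positive when the ear is above the shoulder
    return math.degrees(math.atan2(abs(dx), dy))

def is_forward_head(ear, shoulder, threshold_deg=15.0):
    return neck_angle_deg(ear, shoulder) > threshold_deg

upright = ((105, 80), (100, 200))    # ear nearly above the shoulder
slouched = ((160, 120), (100, 200))  # ear well in front of the shoulder
```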

Robust Face Alignment using Progressive AAM (점진적 AAM을 이용한 강인한 얼굴 윤곽 검출)

  • Kim, Dae-Hwan;Kim, Jae-Min;Cho, Seong-Won;Jang, Yong-Suk;Kim, Boo-Gyoun;Chung, Sun-Tae
    • The Journal of the Korea Contents Association / v.7 no.2 / pp.11-20 / 2007
  • AAM has been successfully applied to face alignment, but its performance is very sensitive to initial values. In this paper, we propose a face alignment method using progressive AAM. The proposed method consists of two stages: a modeling and relation-derivation stage and a fitting stage. The modeling and relation-derivation stage first builds two AAM models, an inner-face AAM model and a whole-face AAM model, and then derives the relation matrix between the inner-face and whole-face AAM model parameter vectors. The fitting stage proceeds progressively in two phases. In the first phase, the method finds the feature parameters for the inner facial feature points of a new face; in the second phase, it localizes the whole set of facial feature points of the new face using initial values estimated from the inner feature parameters obtained in the first phase and the relation matrix obtained in the first stage. Experiments verify that the proposed progressive AAM-based face alignment method is more robust to pose and face background than conventional basic AAM-based face alignment.
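The relation-derivation step can be sketched as a linear least-squares fit: collect training pairs of inner-face and whole-face parameter vectors and solve for a matrix mapping one to the other, which then seeds the second fitting phase. The synthetic data and the plain least-squares formulation are illustrative assumptions; the paper does not specify how the relation matrix is estimated.

```python
import numpy as np

# Sketch: fit a linear relation R with P_whole ≈ P_inner @ R.T over training
# pairs, then predict initial whole-face parameters from fitted inner params.

rng = np.random.default_rng(0)
R_true = rng.normal(size=(4, 3))      # unknown ground-truth relation (toy)
P_inner = rng.normal(size=(50, 3))    # 50 training inner parameter vectors
P_whole = P_inner @ R_true.T          # corresponding whole-face vectors

# Least-squares solve for the relation matrix.
R_est, *_ = np.linalg.lstsq(P_inner, P_whole, rcond=None)
R_est = R_est.T

def init_whole_params(p_inner, R):
    """Initial whole-face AAM parameters predicted from inner-face params."""
    return R @ p_inner
```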

A Method for Determining Face Recognition Suitability of Face Image (얼굴영상의 얼굴인식 적합성 판정 방법)

  • Lee, Seung Ho
    • Journal of the Korea Academia-Industrial cooperation Society / v.19 no.11 / pp.295-302 / 2018
  • Face recognition (FR) has been widely used in various applications, such as smart surveillance systems, immigration control at airports, and user authentication on smart devices. FR in well-controlled conditions has been extensively studied and is relatively mature. In unconstrained conditions, however, FR performance can degrade due to undesired characteristics of the input face image, such as irregular facial pose variations. To overcome this problem, this paper proposes a new method for determining whether an input image is suitable for FR. In the proposed method, a reconstruction error is computed for the input face image using a predefined set of reference face images; suitability is then determined by comparing the reconstruction error with a threshold value. To reduce the effect of illumination changes on this determination, a preprocessing algorithm is applied to the input and reference face images before reconstruction. Experimental results show that the proposed method accurately discriminates non-frontal and/or incorrectly aligned face images from correctly aligned frontal face images. In addition, only 3 ms is required to process a face image of 64×64 pixels, which further demonstrates the method's efficiency.
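The core idea, reconstruct the input from reference faces and threshold the residual, can be sketched in a few lines. The tiny 4-pixel "images", the least-squares reconstruction, and the threshold value are illustrative assumptions; the paper operates on 64×64 images with its own preprocessing.

```python
import numpy as np

# Sketch: suitability = reconstruction error against a reference set is
# below a threshold. References are frontal, well-aligned "faces" (toy data).

refs = np.array([[1.0, 0.0, 1.0, 0.0],
                 [0.0, 1.0, 0.0, 1.0]])  # reference faces as rows

def reconstruction_error(x, refs):
    """Residual norm after least-squares projection of x onto the span of
    the reference images."""
    coeffs, *_ = np.linalg.lstsq(refs.T, x, rcond=None)
    return np.linalg.norm(x - refs.T @ coeffs)

def is_suitable(x, refs, threshold=0.5):
    """Suitable for FR if the input is well explained by the references."""
    return reconstruction_error(x, refs) < threshold

frontal_like = np.array([0.9, 0.1, 0.9, 0.1])   # near the reference subspace
off_pose     = np.array([1.0, 1.0, -1.0, 0.0])  # far from the subspace
```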

3D Facial Animation with Head Motion Estimation and Facial Expression Cloning (얼굴 모션 추정과 표정 복제에 의한 3차원 얼굴 애니메이션)

  • Kwon, Oh-Ryun;Chun, Jun-Chul
    • The KIPS Transactions:PartB / v.14B no.4 / pp.311-320 / 2007
  • This paper presents a vision-based 3D facial expression animation technique and system that provide robust 3D head pose estimation and real-time facial expression control. Much research on 3D face animation has focused on facial expression control itself rather than on 3D head motion tracking, yet head motion tracking is one of the critical issues to be solved in developing realistic facial animation. In this research, we developed an integrated animation system that performs 3D head motion tracking and facial expression control at the same time. The proposed system consists of three major phases: face detection, 3D head motion tracking, and facial expression control. For face detection, a non-parametric HT skin color model and template matching are used to efficiently detect the facial region in each video frame. For 3D head motion tracking, we exploit a cylindrical head model that is projected onto the initial head motion template. Given an initial reference template of the face image and the corresponding head motion, the cylindrical head model is created and the full head motion is traced based on the optical flow method. For facial expression cloning we utilize a feature-based method: the major facial feature points are detected from the geometric information of the face with template matching and traced by optical flow. Since the locations of the varying feature points combine head motion and facial expression information, the animation parameters that describe the variation of the facial features are acquired from the geometrically transformed frontal head pose image. Finally, facial expression cloning is done by a two-step fitting process: the control points of the 3D model are moved by applying the animation parameters to the face model, and the non-feature points around the control points are displaced using a Radial Basis Function (RBF). Experiments show that the developed vision-based animation system can create realistic facial animation with robust head pose estimation and facial variation from input video images.
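The last step above, displacing non-feature vertices from the motion of nearby control points, can be sketched with a radial-basis-style blend. The Gaussian kernel, its width, and the sample points are illustrative assumptions; the paper does not specify the kernel in the abstract.

```python
import math

# Sketch: non-feature points move by a distance-weighted blend of the
# control-point displacements (a simple Gaussian/Shepard-style RBF blend).

def rbf_displace(p, controls, displacements, sigma=1.0):
    """Weighted average of control-point displacements, with Gaussian
    weights that fall off with distance from point p."""
    wsum, dx, dy = 0.0, 0.0, 0.0
    for (cx, cy), (ux, uy) in zip(controls, displacements):
        w = math.exp(-((p[0] - cx) ** 2 + (p[1] - cy) ** 2) / (2 * sigma ** 2))
        wsum += w
        dx += w * ux
        dy += w * uy
    return (dx / wsum, dy / wsum) if wsum > 0 else (0.0, 0.0)

controls = [(0.0, 0.0), (10.0, 0.0)]
moves    = [(1.0, 0.0), (0.0, 0.0)]   # only the first control point moves
near_first = rbf_displace((0.1, 0.0), controls, moves)  # follows first point
midway     = rbf_displace((5.0, 0.0), controls, moves)  # averages both
```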

Facial Local Region Based Deep Convolutional Neural Networks for Automated Face Recognition (자동 얼굴인식을 위한 얼굴 지역 영역 기반 다중 심층 합성곱 신경망 시스템)

  • Kim, Kyeong-Tae;Choi, Jae-Young
    • Journal of the Korea Convergence Society / v.9 no.4 / pp.47-55 / 2018
  • In this paper, we propose a novel face recognition (FR) method that combines weighted deep local features extracted from multiple Deep Convolutional Neural Networks (DCNNs) learned on a set of facial local regions. In the proposed method, the so-called weighted deep local features are generated from multiple DCNNs, each trained with a particular face local region, and the corresponding weight represents the importance of that local region for improving FR performance. The weighted deep local features are applied to Joint Bayesian metric learning in conjunction with a Nearest Neighbor (NN) classifier for FR. Systematic and comparative experiments show that the proposed method is robust to variations in pose, illumination, and expression, and the results demonstrate that it is feasible for improving face recognition performance.
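The fusion idea can be sketched as weight-scaled concatenation of per-region features followed by nearest-neighbor matching. The toy 2-D "features", the weight values, and plain Euclidean distance (standing in for Joint Bayesian metric learning) are illustrative assumptions.

```python
import math

# Sketch: scale each region's feature vector by its importance weight,
# concatenate, and classify with a nearest-neighbor gallery lookup.

def fuse(region_feats, weights):
    """Concatenate per-region feature vectors, each scaled by its weight."""
    out = []
    for feat, w in zip(region_feats, weights):
        out.extend(w * v for v in feat)
    return out

def nn_classify(query, gallery):
    """Return the label of the closest gallery entry (Euclidean distance)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(gallery, key=lambda item: dist(query, item[1]))[0]

weights = [0.7, 0.3]  # e.g. eye region weighted over mouth region (assumed)
gallery = [("alice", fuse([[1.0, 0.0], [0.0, 1.0]], weights)),
           ("bob",   fuse([[0.0, 1.0], [1.0, 0.0]], weights))]
probe = fuse([[0.9, 0.1], [0.2, 0.8]], weights)
```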

An Improved Face Detection Method Using a Hybrid of Hausdorff and LBP Distance (Hausdorff와 LBP 거리의 융합을 이용한 개선된 얼굴검출)

  • Park, Seong-Chun;Koo, Ja-Young
    • Journal of the Korea Society of Computer and Information / v.15 no.11 / pp.67-73 / 2010
  • In this paper, a new face detection method that is more accurate than conventional methods is proposed. The method uses a hybrid of the Hausdorff distance, based on the geometric similarity between two sets of points, and the LBP distance, based on the distribution of local micro-texture in an image. The parameters for normalization and the optimal blending factor of the two metrics were calculated from training sample images. A popular face database was used to show that the proposed method is more effective and more robust to variations in pose, illumination, and background than methods based on the Hausdorff distance or the LBP distance alone. In a particular case, the average error distance between the detected and true face locations was reduced to 47.9% of the LBP-method result and 22.8% of the Hausdorff-method result.
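The hybrid metric can be sketched as a normalized blend of a geometric term and a texture term. Here a simple histogram L1 distance stands in for the LBP distance, and the normalization constant and blending factor are illustrative assumptions (the paper learns them from training images).

```python
import math

# Sketch: blended distance = alpha * normalized Hausdorff + (1 - alpha) *
# texture-histogram distance. All constants below are assumptions.

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two 2-D point sets."""
    def directed(P, Q):
        return max(min(math.dist(p, q) for q in Q) for p in P)
    return max(directed(A, B), directed(B, A))

def hybrid_distance(A, B, hist_a, hist_b, alpha=0.5, h_norm=10.0):
    d_geo = hausdorff(A, B) / h_norm                         # geometric term
    d_tex = sum(abs(x - y) for x, y in zip(hist_a, hist_b))  # texture term
    return alpha * d_geo + (1 - alpha) * d_tex

A = [(0, 0), (1, 0), (0, 1)]
B = [(0, 0), (1, 0), (0, 2)]
d = hybrid_distance(A, B, [0.5, 0.5], [0.4, 0.6])
```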

Robust Vehicle Occupant Detection based on RGB-Depth-Thermal Camera (다양한 환경에서 강건한 RGB-Depth-Thermal 카메라 기반의 차량 탑승자 점유 검출)

  • Song, Changho;Kim, Seung-Hun
    • The Journal of Korea Robotics Society / v.13 no.1 / pp.31-37 / 2018
  • As self-driving cars develop, in-vehicle safety has also become a hot topic. Passive safety systems such as airbags and seat belts are being changed into active systems that grasp the status and behavior of the passengers, including the driver, to mitigate risk. Furthermore, occupant information is expected to enable customized services such as seat adjustment, air-conditioning operation, and D.W.D. (Distraction While Driving) warnings suited to each passenger. In this paper, we propose a robust vehicle occupant detection algorithm based on an RGB-Depth-Thermal camera for obtaining passenger information. The RGB-Depth-Thermal camera sensor system was configured to be robust in various environments. OpenPose, a deep learning algorithm, was used for occupant detection; it is advantageous not only for RGB images but also for thermal images, even when using an existing trained model. The algorithm will be extended to acquire higher-level information, such as passenger posture detection and face recognition mentioned in the introduction, and to provide customized active convenience services.
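Turning OpenPose-style keypoint output into seat occupancy can be sketched as counting confident keypoints per seat region. The seat boxes, confidence cutoff, and minimum-keypoint rule are illustrative assumptions, not values from the paper.

```python
# Sketch: a seat is occupied when a detected person has enough confident
# keypoints inside that seat's image region. All constants are assumptions.

SEATS = {"driver": (0, 0, 320, 480), "passenger": (320, 0, 640, 480)}

def in_box(pt, box):
    x, y = pt
    x0, y0, x1, y1 = box
    return x0 <= x < x1 and y0 <= y < y1

def occupied_seats(people, conf_min=0.3, min_kpts=5):
    """Map each person's confident keypoints (x, y, confidence) to a seat."""
    out = set()
    for kpts in people:
        for seat, box in SEATS.items():
            hits = sum(1 for (x, y, c) in kpts
                       if c >= conf_min and in_box((x, y), box))
            if hits >= min_kpts:
                out.add(seat)
    return out

driver_kpts = [(100 + i, 200 + i, 0.9) for i in range(6)]  # confident points
noise_kpts  = [(400, 240, 0.1)] * 6                        # low confidence
```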

Impact of anthropogenic activities on the accumulation of heavy metals in water, sediments and some commercially important fish of the Padma River, Bangladesh

  • M Golam Mortuza
    • Fisheries and Aquatic Sciences / v.27 no.2 / pp.66-75 / 2024
  • Heavy metals occur naturally in ecosystems, and their presence in freshwater rivers is increasing through anthropogenic activities, which poses a threat to living beings. In this study, heavy metal concentrations (Zn, Mn, Cu, Cd, Cr, Pb, and Ni) in different organs (muscle, skin, and gill) of fish from the Padma River were evaluated to quantify and compare contamination levels and the related human health risks. The results revealed that heavy metal concentrations in the water, surface sediments, and fish taken from the Padma River were far below the WHO/USEPA permitted limits. The estimated daily intake (EDI) in muscle was less than the tolerable daily intake (TDI). The target hazard quotients (THQ) and hazard indexes (HI) were less than 1, showing that consumers face no non-carcinogenic risk. Carcinogenic risk (CR) values of Cu, Cd, Cr, Pb, and Ni ranged from 4.00 × 10⁻⁸ to 6.35 × 10⁻⁶, less than 10⁻⁴, while total carcinogenic risk (CRt) values ranged from 9.85 × 10⁻⁶ to 1.10 × 10⁻⁵, indicating that consumption of some of these fish from the Padma River poses a CR. To establish a more accurate risk assessment, additional exposure routes, including inhalation and cutaneous exposure, should be explored.
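The indices reported above follow standard USEPA-style formulas: EDI = C × IR / BW, THQ = EDI / RfD, and HI is the sum of THQs. A minimal sketch, with metal concentrations, fish intake rate, body weight, and reference doses all as illustrative assumptions rather than values from the study:

```python
# Sketch of EDI / THQ / HI computation. All parameter values are assumed.

def edi(conc_mg_per_kg, intake_kg_per_day, body_weight_kg):
    """Estimated daily intake, mg per kg body weight per day."""
    return conc_mg_per_kg * intake_kg_per_day / body_weight_kg

def thq(edi_value, rfd):
    """Target hazard quotient; below 1 suggests no non-carcinogenic risk."""
    return edi_value / rfd

concs = {"Pb": 0.05, "Cd": 0.01}    # mg/kg in fish muscle (assumed)
rfds  = {"Pb": 0.004, "Cd": 0.001}  # oral reference doses, mg/kg/day (assumed)
ir, bw = 0.055, 60.0                # 55 g/day fish intake, 60 kg adult

# Hazard index: sum of per-metal target hazard quotients.
hi = sum(thq(edi(concs[m], ir, bw), rfds[m]) for m in concs)
```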

The gaze cueing effect depending on the orientations of the face and its background (얼굴과 배경의 방향에 따른 시선 단서 효과)

  • Lijeong, Hong;Min-Shik, Kim
    • Korean Journal of Cognitive Science / v.34 no.2 / pp.85-110 / 2023
  • The gaze cueing effect appears as faster and more accurate detection of a target when the direction of another person's gaze corresponds with the location of the visual target. The gaze cue can be affected by the orientation of the face: the gaze cueing effect is strong when the face is presented upright, but when the face is presented inverted the effect has been observed in only some studies (e.g., Tipples, 2005). This study aimed to examine whether gaze can operate as a cue to guide attention with upright faces, and added variables that can affect the gaze cue: the orientation of the face, the orientation of the background, and the time interval between the gaze cue and the target (SOA). It systematically manipulated these variables to explore whether the gaze cueing effect can be observed under the various conditions. The results showed a significant gaze cueing effect even for inverted faces, in contrast with previous studies. These findings were consistently observed when the background stimulus was absent (Experiment 1) and present (Experiments 2 and 3). However, there was no significant interaction between the orientations of the face and the background. Moreover, at the short SOA (150 ms), we found a significant gaze cueing effect in every face and background orientation condition, whereas there was no significant gaze cueing effect at the long SOA (1000 ms). By presenting a consistent observation of the gaze cueing effect at the short SOA (150 ms) even for inverted faces, the results of this study raise questions about the reliability and repeatability of previous studies that did not report significant gaze cueing effects for such faces. Furthermore, our results provide additional evidence that attention can be guided toward the direction of gaze across various orientations of the face and background.
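In designs like this, the cueing effect per condition is scored as mean reaction time on incongruent trials minus mean reaction time on congruent trials; a positive difference indicates a cueing effect. A minimal sketch with made-up trial data (the RT values are illustrative assumptions):

```python
# Sketch: gaze cueing effect = mean incongruent RT - mean congruent RT.

def cueing_effect(trials):
    """trials: list of (congruent: bool, rt_ms: float). Returns the mean
    incongruent RT minus the mean congruent RT, in milliseconds."""
    cong = [rt for c, rt in trials if c]
    incong = [rt for c, rt in trials if not c]
    return sum(incong) / len(incong) - sum(cong) / len(cong)

short_soa = [(True, 410), (True, 420), (False, 450), (False, 460)]
long_soa  = [(True, 430), (True, 432), (False, 431), (False, 433)]
effect_short = cueing_effect(short_soa)  # clearly positive
effect_long  = cueing_effect(long_soa)   # near zero
```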