• Title/Summary/Keywords: Face Detection

Search results: 1,084 (processing time: 0.023 s)

Skin Region Extraction Using Color Information and Skin-Color Model

  • 박성욱;박종관;박종욱
    • 전자공학회논문지 IE / Vol. 45, No. 4 / pp.60-67 / 2008
  • Skin color is one of the most important cues for automated face recognition. This paper proposes a skin-region extraction method that uses color information together with a skin-color model. The proposed method improves skin-region detection performance through an adaptive illumination-correction technique, and speeds up processing by applying a pre-processing filter that removes non-skin regions first. In addition, the ST color space, which performs well for skin-color detection, is modified so that skin regions can be extracted more accurately. Experimental results show that the proposed method yields better detection than existing methods, while improving processing speed by about 33~48%.
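The abstract does not specify how the modified ST color space is defined, so as an illustration of the general pipeline it describes (color-space conversion followed by chrominance thresholding), here is a minimal sketch using the widely used YCbCr skin rule; the threshold ranges are conventional textbook values, not the paper's.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Convert an HxWx3 uint8 RGB image to YCbCr (float, ITU-R BT.601)."""
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =        0.299 * r    + 0.587 * g    + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r      - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def skin_mask(rgb, cb_range=(77, 127), cr_range=(133, 173)):
    """Boolean mask of candidate skin pixels via chrominance thresholds."""
    ycbcr = rgb_to_ycbcr(rgb)
    cb, cr = ycbcr[..., 1], ycbcr[..., 2]
    return ((cb >= cb_range[0]) & (cb <= cb_range[1]) &
            (cr >= cr_range[0]) & (cr <= cr_range[1]))
```

Illumination correction and morphological cleanup, as in the paper, would run before and after this thresholding step.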

Design of Low Cost Real-Time Audience Adaptive Digital Signage using Haar Cascade Facial Measures

  • Lee, Dongwoo;Kim, Daehyun;Lee, Junghoon;Lee, Seungyoun;Hwang, Hyunsuk;Mariappan, Vinayagam;Lee, Minwoo;Cha, Jaesang
    • International Journal of Advanced Culture Technology / Vol. 5, No. 1 / pp.51-57 / 2017
  • Digital signage is becoming part of daily life across a wide range of visual-advertisement market segments, such as stations, hotels, and retail stores. Current digital signage systems generally offer limited user interactivity with static content. This paper proposes a new approach: a computer-vision-based, dynamic, audience-adaptive, cost-effective digital signage system. The proposed design uses a camera attached to the open-source Raspberry Pi platform to enable real-time audience interaction, applying computer vision algorithms to extract the audience's facial features. The facial features are extracted in real time with the Haar cascade algorithm and used for gender-specific rendering of dynamic signage content. The Haar-cascade-based audience characterization was evaluated on the FERET database, achieving 95% accuracy for gender classification. The proposed system was developed and evaluated with male and female audiences in real-life environments on a camera-equipped Raspberry Pi, with a good level of accuracy.
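The Haar cascade detector rests on integral images, which let any rectangular pixel sum (and hence any Haar-like feature) be evaluated in constant time. A minimal sketch of that core mechanism:

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero border: ii[y, x] = sum of img[:y, :x]."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def rect_sum(ii, y, x, h, w):
    """Sum of the h x w rectangle with top-left corner (y, x), in O(1)."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def haar_two_rect_vertical(ii, y, x, h, w):
    """Two-rectangle Haar-like feature: top half minus bottom half."""
    half = h // 2
    return rect_sum(ii, y, x, half, w) - rect_sum(ii, y + half, x, half, w)
```

A trained cascade (e.g. OpenCV's `haarcascade_frontalface_default.xml`) evaluates thousands of such features in stages, rejecting non-face windows early.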

Video Expression Recognition Method Based on Spatiotemporal Recurrent Neural Network and Feature Fusion

  • Zhou, Xuan
    • Journal of Information Processing Systems / Vol. 17, No. 2 / pp.337-351 / 2021
  • Automatically recognizing facial expressions in video sequences is a challenging task because there is little direct correlation between facial features and subjective emotions in video. To overcome this problem, a video facial expression recognition method using a spatiotemporal recurrent neural network and feature fusion is proposed. First, the video is preprocessed. Then, a double-layer cascade structure is used to detect the face in each video image. Two deep convolutional neural networks are then used to extract the temporal and spatial facial features from the video: a spatial convolutional neural network extracts spatial information features from each frame of the static expression images, and a temporal convolutional neural network extracts dynamic information features from the optical flow across multiple frames of expression images. A multiplicative fusion is performed on the spatiotemporal features learned by the two deep convolutional neural networks. Finally, the fused features are input to a support vector machine to perform the facial expression classification task. Experimental results on the eNTERFACE, RML, and AFEW6.0 datasets show that the recognition rates obtained by the proposed method reach 88.67%, 70.32%, and 63.84%, respectively. Comparative experiments show that the proposed method achieves higher recognition accuracy than other recently reported methods.
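The abstract says only that a "multiplicative fusion" combines the spatial and temporal features; one common reading (an element-wise product of L2-normalized feature vectors, which is an assumption here, not the paper's stated formula) can be sketched as:

```python
import numpy as np

def multiplicative_fusion(spatial_feat, temporal_feat, eps=1e-8):
    """Fuse two feature vectors by element-wise product after L2 normalization.

    Normalizing first keeps either stream from dominating the product
    purely through its magnitude.
    """
    s = spatial_feat / (np.linalg.norm(spatial_feat) + eps)
    t = temporal_feat / (np.linalg.norm(temporal_feat) + eps)
    return s * t
```

The fused vector would then be passed to the SVM classifier in place of either stream alone.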

An Automatic Strabismus Screening Method with Corneal Light Reflex based on Image Processing

  • Huang, Xi-Lang;Kim, Chang Zoo;Choi, Seon Han
    • 한국멀티미디어학회논문지 / Vol. 24, No. 5 / pp.642-650 / 2021
  • Strabismus is one of the most common diseases associated with vision impairment. Especially in infants and children, it is critical to detect strabismus at an early age, because uncorrected strabismus may develop into amblyopia. To this end, ophthalmologists usually perform the Hirschberg test, which observes the corneal light reflex (CLR) to determine the presence and type of strabismus. However, this test is usually done manually in a hospital, which may be difficult for patients who live in remote areas with poor medical access. To address this issue, we propose an automatic strabismus screening method that calculates the CLR ratio to determine the presence of strabismus based on image processing. The method first employs a pre-trained face detection model and a 68-point facial landmark detector to extract the eye region image. Data points located on the limbus are then collected, and the least-squares method is applied to obtain the center coordinates of the iris. Finally, the coordinate of the center of the reflected light point within the iris is extracted and used, together with the coordinates of the iris edges, to calculate the CLR ratio. Experimental results on several images demonstrate that the proposed method is a promising solution for providing strabismus screening to patients who cannot visit hospitals.
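Fitting a circle to the limbus points by least squares is a standard step; below is a minimal sketch using the Kåsa linearization, together with a hypothetical CLR-ratio helper (the paper's exact ratio definition is not given in the abstract, so the helper simply expresses the reflex position as a fraction of the iris width).

```python
import numpy as np

def fit_circle(points):
    """Kåsa least-squares circle fit; returns (cx, cy, r).

    Uses the linearization x^2 + y^2 = 2*cx*x + 2*cy*y + c,
    where c = r^2 - cx^2 - cy^2, solved as an ordinary least-squares problem.
    """
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x**2 + y**2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + cx**2 + cy**2)
    return cx, cy, r

def clr_ratio(reflex_x, edge_left_x, edge_right_x):
    """Horizontal position of the light reflex between the iris edges (0..1)."""
    return (reflex_x - edge_left_x) / (edge_right_x - edge_left_x)
```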

A Review on Robust Principal Component Analysis

  • 이은주;박민규;김충락
    • 응용통계연구 / Vol. 35, No. 2 / pp.327-333 / 2022
  • Principal component analysis (PCA) is the most widely used statistical method for dimension reduction, but despite its many advantages it is highly sensitive to outliers, and various approaches have been proposed to robustify it. Among these, the robust PCA proposed by Candès et al. (2011) and Chandrasekaran et al. (2011) is known to be computationally tractable and the most effective, and it has recently been widely used in artificial intelligence areas such as video surveillance and face recognition. This paper introduces the concept of robust PCA and the most efficient recently proposed algorithms. We also present an example based on real data and suggest directions for future research.
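The Candès et al. (2011) formulation decomposes an observed matrix M into a low-rank part L plus a sparse part S by minimizing ||L||_* + λ||S||_1 subject to L + S = M. A minimal ADMM-style sketch (with a fixed penalty parameter; production solvers typically adapt it):

```python
import numpy as np

def rpca(M, lam=None, mu=None, iters=500, tol=1e-7):
    """Principal Component Pursuit via basic ADMM.

    Alternates singular-value thresholding (for the low-rank part L)
    with entrywise soft thresholding (for the sparse part S), with a
    dual variable Y enforcing L + S = M.
    """
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = mu if mu is not None else (m * n) / (4.0 * np.abs(M).sum())
    S = np.zeros_like(M)
    Y = np.zeros_like(M)
    for _ in range(iters):
        # L-step: singular value thresholding of M - S + Y/mu
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # S-step: entrywise soft thresholding
        R = M - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
        # dual update and convergence check on the constraint residual
        resid = M - L - S
        Y += mu * resid
        if np.linalg.norm(resid) <= tol * np.linalg.norm(M):
            break
    return L, S
```

On a video-surveillance matrix whose columns are frames, L recovers the static background and S the moving foreground.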

Using CNN-VGG16 to detect the tennis motion tracking by information entropy and unascertained measurement theory

  • Zhong, Yongfeng;Liang, Xiaojun
    • Advances in nano research / Vol. 12, No. 2 / pp.223-239 / 2022
  • Object detection seeks objects with particular properties or appearances and predicts their details, including position, size, and rotation angle, in the current picture; it is an important subject in computer vision. Although vision-based object-tracking strategies for analyzing competition videos have been developed, accurately identifying and localizing a fast, small ball remains difficult. In this study, a deep learning network was developed to address these obstacles in tennis motion tracking from a complex perspective, in order to understand athletes' performance. This research used CNN-VGG16 to track the tennis ball in broadcast videos, where its image is distorted, small, and often invisible, not only to identify the ball in a single frame but also to learn patterns across consecutive frames; VGG16 takes 640×360 images to locate the ball and obtains high accuracy on public videos, testing at 99.6%, 96.63%, and 99.5% accuracy, respectively. To avoid overfitting, 9 additional videos and a subset of the previous dataset were partially labeled for 10-fold cross-validation. The results show that CNN-VGG16 outperforms the standard approach by a wide margin and provides excellent ball-tracking performance.
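The abstract's 10-fold cross-validation can be illustrated with a self-contained fold-index generator (unshuffled for clarity; real pipelines usually shuffle or stratify the samples first):

```python
def k_fold_indices(n_samples, k=10):
    """Yield (train_idx, val_idx) pairs for k-fold cross-validation.

    The first n_samples % k folds get one extra sample, so every
    sample appears in exactly one validation fold.
    """
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0)
                  for i in range(k)]
    indices = list(range(n_samples))
    start = 0
    for size in fold_sizes:
        val = indices[start:start + size]
        train = indices[:start] + indices[start + size:]
        yield train, val
        start += size
```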

Deep Learning-Based Detection of People Not Wearing Masks (Development of a Face Mask Detector)

  • 이한성;황찬웅;김종범;장도현;이혜진;임동주;정순기
    • 한국방송∙미디어공학회:학술대회논문집 / 한국방송∙미디어공학회 2020 Fall Conference / pp.270-272 / 2020
  • This paper studies the application of deep learning to automate COVID-19 quarantine measures. COVID-19 and its containment were among the most important issues of 2020, and many people are looking to artificial intelligence (AI), a rising field in IT, for answers. Because of COVID-19, wearing a mask has become a necessity rather than a choice, and a model is needed to enforce it. By applying deep-learning object detection to the video devices already present throughout daily life, real-time automated quarantine enforcement can be implemented at a reasonable cost. In this paper, we conducted research toward implementing this using publicly available open-source object-detection software, and also surveyed how to obtain the datasets required for it.


Development of a structural inspection system with marking damage information at onsite based on an augmented reality technique

  • Junyeon Chung;Kiyoung Kim;Hoon Sohn
    • Smart Structures and Systems / Vol. 31, No. 6 / pp.573-583 / 2023
  • Although unmanned aerial vehicles have been used to overcome the limited accessibility of human-based visual inspection, unresolved issues still remain. Onsite inspectors face difficulty finding previously detected damage locations and tracking their status onsite. For example, an inspector still marks the damage location on a target structure with chalk or drawings while comparing the current status of existing damages to their previous status, as documented onsite. In this study, an augmented-reality-based structural inspection system with onsite damage information marking was developed to enhance the convenience of inspectors. The developed system detects structural damage, creates a holographic marker with damage information on the actual physical damage, and displays the marker onsite via an augmented reality headset. Because inspectors can view a marker with damage information in real time on the display, they can easily identify where the previous damage has occurred and whether the size of the damage is increasing. The performance of the developed system was validated through a field test, demonstrating that the system can enhance convenience by accelerating the inspector's essential tasks such as detecting damages, measuring their size, manually recording their information, and locating previous damages.

Implementation of Enhanced Vision for an Autonomous Map-based Robot Navigation

  • Roland, Cubahiro;Choi, Donggyu;Kim, Minyoung;Jang, Jongwook
    • 한국정보통신학회:학술대회논문집 / 한국정보통신학회 2021 Fall Conference / pp.41-43 / 2021
  • Robot Operating System (ROS) has been a prominent and successful framework in the robotics industry and academia. However, the framework has long been focused on, and limited to, robot navigation and manipulation of objects in the environment. This focus leaves out other important fields such as speech recognition and vision. Our goal is to take advantage of ROS's capacity to integrate additional libraries of programming functions aimed at real-time computer vision with a depth-image camera. In this paper we focus on the implementation of upgraded vision with the help of a depth camera, which provides high-quality data for a much enhanced and accurate understanding of the environment. The varied data from the cameras are then incorporated into the ROS communication structure for any potential use. For this particular case, the system uses OpenCV libraries to manipulate the data from the camera and provide face-detection capability to the robot while it navigates an indoor environment. The whole system has been implemented and tested on the latest Turtlebot3 and Raspberry Pi 4 technologies.


Intelligent Shoes for Detecting Blind Falls Using the Internet of Things

  • Ahmad Abusukhon
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 17, No. 9 / pp.2377-2398 / 2023
  • In our daily lives, we engage in a variety of tasks that rely on our senses, such as sight. Blindness is the absence of the sense of vision. According to the World Health Organization, 2.2 billion people worldwide suffer from various forms of vision impairment. Unfortunately, blind people face a variety of indoor and outdoor challenges on a daily basis, limiting their mobility and preventing them from engaging in other activities. Blind people are highly vulnerable to a variety of hazards, including falls; various obstacles, such as stairs, can cause a fall. The Internet of Things (IoT) is used to track falls and send a warning message to the blind person's caretakers. One gap in previous works is that they were unable to differentiate between true and false falls. Treating false falls as true falls results in many false alarms being sent to the caretakers, who may then reject the IoT system. To bridge this gap, this paper proposes an intelligent shoe that can precisely distinguish between false and true falls based on three sensors: a load scale sensor, a light sensor, and a flex sensor. The proposed IoT system is tested in an indoor environment for various fall scenarios using four machine learning models. The results from our system showed an accuracy of 0.96. Compared to the state of the art, our system is simpler and more accurate, since it avoids sending false alarms to the caretakers.
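The abstract does not give the decision logic that combines the three sensors, so the following is a purely hypothetical rule-based sketch of the idea: a reading counts as a true fall only when all three sensors agree. Every threshold and sensor semantic here is a placeholder for illustration, not from the paper (which uses machine learning models).

```python
def classify_fall(load_kg, light_lux, flex_bend_deg,
                  load_drop_thresh=5.0, flex_thresh=45.0, dark_lux=10.0):
    """Hypothetical AND-combination of three shoe-sensor readings.

    A true fall is reported only when the load scale shows the foot has
    been unweighted, the flex sensor shows a sharp bend, and the light
    sensor confirms the shoe sole is exposed. Requiring all three
    conditions is what suppresses false alarms in this sketch.
    """
    unweighted = load_kg < load_drop_thresh
    bent = flex_bend_deg > flex_thresh
    sole_exposed = light_lux > dark_lux
    return unweighted and bent and sole_exposed
```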