• Title/Summary/Keyword: face tracking

Search Result 342

A Study on the Mechanism of Social Robot Attitude Formation through Consumer Gaze Analysis: Focusing on the Robot's Face (소비자 시선 분석을 통한 소셜로봇 태도 형성 메커니즘 연구: 로봇의 얼굴을 중심으로)

  • Ha, Sangjip;Yi, Eun-ju;Yoo, In-jin;Park, Do-Hyung
    • Proceedings of the Korean Society of Computer Information Conference / 2021.07a / pp.409-414 / 2021
  • This study applies eye tracking to robot appearance, one of the main streams of social-robot design research. A research model for social-robot design was constructed by linking users' eye-tracking metrics, measured over areas of interest covering the robot's whole body, face, eyes, and lips, with user attitudes captured through a design-evaluation survey. Specifically, the study sought to uncover the mechanism by which users form attitudes toward a robot and thereby derive concrete insights that can inform robot design. The eye-tracking metrics used in this study were fixation duration (Fixation), time to first gaze (First Visit), total dwell time (Total Viewed), and number of revisits (Revisits), and the areas of interest (AOIs) were defined as the social robot's face, eyes, lips, and body. Through the design-evaluation survey, consumer beliefs such as the robot's emotional expressiveness, humanlikeness, and face prominence were collected, and attitude toward the robot was set as the dependent variable.

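Three of the four gaze metrics named in the abstract (First Visit, Total Viewed, Revisits) can be sketched directly from raw, AOI-labeled eye-tracker samples; proper fixation detection (clustering samples into fixations) is omitted here. This is a minimal illustration, not the study's actual pipeline, and it assumes samples arrive at a fixed rate with the AOI already resolved; note that `revisits` below counts every entry into the AOI, including the first.

```python
# Hypothetical sketch: AOI metrics from timestamped gaze samples.
# Each sample is (timestamp_ms, aoi), where aoi is the region the gaze
# falls in ("face", "eyes", ...) or None if outside all AOIs.

def aoi_metrics(samples, aoi, sample_ms=10):
    """Total dwell time, time of first visit, and entry count for one AOI."""
    total_viewed = 0     # total time (ms) the gaze stayed in the AOI
    first_visit = None   # timestamp of the first sample inside the AOI
    revisits = 0         # number of separate entries into the AOI
    inside = False
    for t, region in samples:
        if region == aoi:
            total_viewed += sample_ms
            if first_visit is None:
                first_visit = t
            if not inside:      # a new entry into the AOI
                revisits += 1
                inside = True
        else:
            inside = False
    return {"total_viewed_ms": total_viewed,
            "first_visit_ms": first_visit,
            "revisits": revisits}

gaze = [(0, "body"), (10, "face"), (20, "face"), (30, "eyes"),
        (40, "face"), (50, None), (60, "face")]
print(aoi_metrics(gaze, "face"))
```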

Development of a structural inspection system with marking damage information at onsite based on an augmented reality technique

  • Junyeon Chung;Kiyoung Kim;Hoon Sohn
    • Smart Structures and Systems / v.31 no.6 / pp.573-583 / 2023
  • Although unmanned aerial vehicles have been used to overcome the limited accessibility of human-based visual inspection, unresolved issues remain. Onsite inspectors have difficulty finding previously detected damage locations and tracking their status onsite. For example, an inspector still marks a damage location on the target structure with chalk or drawings while comparing the current status of existing damage to its previously documented status. In this study, an augmented-reality-based structural inspection system with onsite damage information marking was developed to enhance the convenience of inspectors. The developed system detects structural damage, creates a holographic marker with damage information on the actual physical damage, and displays the marker onsite via an augmented reality headset. Because inspectors can view a marker with damage information in real time on the display, they can easily identify where previous damage occurred and whether its size is increasing. The performance of the developed system was validated through a field test, demonstrating that the system can enhance convenience by accelerating the inspector's essential tasks, such as detecting damage, measuring its size, manually recording its information, and locating previous damage.
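The marker-per-damage idea described above can be sketched as a small data record; the names, fields, and values here are hypothetical illustrations, not the paper's implementation:

```python
# Hypothetical sketch of the record behind an AR damage marker: each marker
# stores where the damage is anchored, its measured size, and when it was
# inspected, so an inspector can see at a glance whether it has grown.

from dataclasses import dataclass

@dataclass
class DamageMarker:
    damage_id: str
    position_m: tuple     # anchor position in the structure's frame (x, y, z)
    length_mm: float      # measured damage size at this inspection
    inspected_on: str

def size_change(previous: DamageMarker, current: DamageMarker) -> float:
    """Growth of the damage (mm) between two inspections of the same marker."""
    assert previous.damage_id == current.damage_id
    return current.length_mm - previous.length_mm

old = DamageMarker("crack-07", (1.2, 0.4, 2.0), 35.0, "2022-04-01")
new = DamageMarker("crack-07", (1.2, 0.4, 2.0), 41.5, "2023-04-01")
print(size_change(old, new))   # a positive value means the damage is growing
```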

Post-COVID-19 Syndrome: The Effect of Regret on Travelers' Dynamic Carpooling Decisions

  • Li Wang;Boya Wang;Qiang Xiao
    • Journal of Information Processing Systems / v.20 no.2 / pp.239-251 / 2024
  • Coronavirus disease 2019 (COVID-19) has severely curtailed travelers' willingness to carpool and complicated the psychological processing system of travelers' carpooling decisions. In the post-COVID-19 era, a two-stage decision model under dynamic decision scenarios is constructed by tracking the psychological states of subjects in the face of multi-scenario carpooling decisions. Through a scenario experiment method, this paper investigates how three psychological variables, travelers' psychological distance to COVID-19, anticipated regret, and experienced regret about carpooling decisions, affect their willingness to carpool and re-carpool. The results show that in the initial carpooling decision, travelers' perception gap of anticipated regret positively predicts carpooling willingness and partially mediates between psychological distance to COVID-19 and carpooling willingness; in the re-carpooling decision, travelers' perception gap of anticipated regret mediates in the process of experienced regret influencing re-carpooling willingness; the inhibitory effect of experienced regret on carpooling in the context of COVID-19 is stronger than its facilitative effect on carpooling willingness. This paper tries to offer a fact-based decision-processing system for travelers.

Trend and future prospect on the development of technology for electronic security system (기계경비시스템의 기술 변화추세와 개발전망)

  • Chung, Tae-Hwang;So, Sung-Young
    • Korean Security Journal / no.19 / pp.225-244 / 2009
  • Electronic security systems are composed mainly of electronic information and communication devices, so the system technology, configuration, and management of an electronic security system can be affected by changes in the information and communication environment. This study proposes future prospects for the development of electronic security system technology through an analysis of development trends and current conditions. The study is based on a literature review and interviews with users and providers of electronic security systems; a survey of system providers and members of security integration companies was also carried out to obtain more practical results. Hybrid DVR technology with multiple functions such as motion detection, target tracking, and image identification is expected to be developed, as is "embedded IP camera" technology with a built-in internet server and image-identification software; these technologies could change the configuration and management of CCTV systems. Fingerprint identification and face identification technologies continue to be developed for greater reliability, but continued development of surveillance and three-dimensional identification technology is needed for more efficient face identification systems. As the radio identification and tracking functions of RFID are regarded as very useful for access control systems, RFID hardware and software are expected to be developed further, though government support is necessary to revitalize the market. Behavior-pattern identification sensor technology is also expected to be developed; it could replace passive infrared sensors, which cause system errors, and give security guard firms confidence in their response. Because the principle of behavior-pattern identification is similar to that of image identification, these two technologies could be integrated with the tracking and radio identification functions of RFID into a total monitoring system. For a more efficient electronic security system, middleware plays a very important role in integrating these technologies, making an integrated security system possible.


Autonomous Mobile Robot System Using Adaptive Spatial Coordinates Detection Scheme based on Stereo Camera (스테레오 카메라 기반의 적응적인 공간좌표 검출 기법을 이용한 자율 이동로봇 시스템)

  • Ko Jung-Hwan;Kim Sung-Il;Kim Eun-Soo
    • The Journal of Korean Institute of Communications and Information Sciences / v.31 no.1C / pp.26-35 / 2006
  • In this paper, an autonomous mobile robot system for intelligent path planning using a spatial-coordinate detection scheme based on a stereo camera is proposed. In the proposed system, the face area of a moving person is detected from the left image of the stereo pair using the YCbCr color model, and its center coordinates are computed using the centroid method; using these data, the stereo camera mounted on the mobile robot can be controlled to track the moving target in real time. Moreover, depth information can be detected using the disparity map obtained from the left and right images captured by the tracking-controlled stereo camera system and the perspective transformation between the 3-D scene and the image plane. Finally, based on an analysis of these calculated coordinates, intelligent path planning and estimation for the mobile robot are derived. In experiments on robot driving with 240 frames of stereo images, the error ratio between the calculated and measured distances from the mobile robot to the objects, and between the objects themselves, is found to be very low: $2.19\%$ and $1.52\%$ on average, respectively.
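The detection step described above, YCbCr skin segmentation followed by the centroid method, can be sketched as follows. The Cb/Cr skin range used here is a common heuristic assumed for illustration, not necessarily the paper's thresholds, and the conversion uses the standard JPEG/BT.601 equations.

```python
import numpy as np

def rgb_to_ycbcr(img):
    """Per-pixel RGB -> YCbCr using the standard JPEG/BT.601 equations."""
    r = img[..., 0].astype(float)
    g = img[..., 1].astype(float)
    b = img[..., 2].astype(float)
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def face_centroid(img):
    """Centroid (row, col) of pixels in a heuristic skin Cb/Cr range."""
    _, cb, cr = rgb_to_ycbcr(img)
    mask = (cb >= 77) & (cb <= 127) & (cr >= 133) & (cr <= 173)
    rows, cols = np.nonzero(mask)
    if rows.size == 0:
        return None                     # no skin-colored region found
    return rows.mean(), cols.mean()     # centroid method

# Tiny synthetic "left image": a skin-toned square on a dark background.
img = np.zeros((40, 40, 3), dtype=np.uint8)
img[10:20, 15:25] = (200, 120, 90)      # assumed skin-like RGB value
print(face_centroid(img))
```

In the full system, this centroid would drive the pan/tilt control of the stereo camera, and the disparity map around it would supply the depth estimate.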

Difference in visual attention during the assessment of facial attractiveness and trustworthiness (얼굴 매력도와 신뢰성 평가에서 시각적 주의의 차이)

  • Sung, Young-Shin;Cho, Kyung-Jin;Kim, Do-Yeon;Kim, Hack-Jin
    • Science of Emotion and Sensibility / v.13 no.3 / pp.533-540 / 2010
  • This study was designed to examine the difference in visual attention between evaluations of facial attractiveness and facial trustworthiness, which may be the two most fundamental social evaluations for forming first impressions in various types of social interaction. In Study 1, participants evaluated the attractiveness and trustworthiness of 40 new faces while their gaze directions were recorded with an eye tracker. The analysis revealed that participants spent significantly longer gaze fixation time on certain facial features, such as the eyes and nose, during the evaluation of facial trustworthiness than during the evaluation of facial attractiveness. In Study 2, participants performed the same face evaluation tasks, except that a word was briefly displayed on a certain facial feature in each face trial; the trials were followed by unexpected recall tests of the previously viewed words. The analysis demonstrated that the recognition rate of words that had been presented on the nose was significantly higher for the facial trustworthiness task than for the facial attractiveness task. These findings suggest that the evaluation of facial trustworthiness may be distinguished from that of facial attractiveness in terms of the allocation of attentional resources.


Facial Point Classifier using Convolution Neural Network and Cascade Facial Point Detector (컨볼루셔널 신경망과 케스케이드 안면 특징점 검출기를 이용한 얼굴의 특징점 분류)

  • Yu, Je-Hun;Ko, Kwang-Eun;Sim, Kwee-Bo
    • Journal of Institute of Control, Robotics and Systems / v.22 no.3 / pp.241-246 / 2016
  • Nowadays, many people are interested in facial expressions and human behavior, and human-robot interaction (HRI) researchers utilize digital image processing, pattern recognition, and machine learning in their studies. Facial feature point detection algorithms are very important for face recognition, gaze tracking, and expression and emotion recognition. In this paper, a cascade facial feature point detector is used to find facial feature points such as the eyes, nose, and mouth. However, the detector has difficulty extracting feature points from some images, because images vary in conditions such as size, color, and brightness. Therefore, we propose an algorithm that modifies the cascade facial feature point detector using a convolutional neural network. The structure of the convolutional neural network is based on Yann LeCun's LeNet-5. As input to the network, color and grayscale outputs from the cascade facial feature point detector were used, resized to $32{\times}32$; in addition, the grayscale images were converted to the YUV format. The grayscale and color images form the basis of the network's input. We then classified about 1,200 test images of subjects. This research found that the proposed method is more accurate than the cascade facial feature point detector alone, because the algorithm corrects the detector's results.
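As a rough illustration of why the $32{\times}32$ input size fits a LeNet-5-style architecture, the following sketch (illustrative only, not the paper's network) traces how such an input shrinks through alternating valid 5x5 convolutions and 2x2 max-poolings:

```python
import numpy as np

def conv2d_valid(x, k):
    """Naive valid 2-D convolution (really cross-correlation, as in CNNs)."""
    kh, kw = k.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def maxpool2(x):
    """Non-overlapping 2x2 max-pooling (trailing odd row/col dropped)."""
    h, w = x.shape[0] // 2 * 2, x.shape[1] // 2 * 2
    return x[:h, :w].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

x = np.random.rand(32, 32)   # one 32x32 input channel
k = np.random.rand(5, 5)     # one 5x5 kernel
c1 = conv2d_valid(x, k)      # 5x5 valid conv: 32 -> 28
s1 = maxpool2(c1)            # 2x2 pool:       28 -> 14
c2 = conv2d_valid(s1, k)     # 5x5 valid conv: 14 -> 10
s2 = maxpool2(c2)            # 2x2 pool:       10 -> 5
print(c1.shape, s1.shape, c2.shape, s2.shape)
```

The 5x5 maps left after the second pooling are what a fully connected classification head would consume.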

Audio-Visual Fusion for Sound Source Localization and Improved Attention (음성-영상 융합 음원 방향 추정 및 사람 찾기 기술)

  • Lee, Byoung-Gi;Choi, Jong-Suk;Yoon, Sang-Suk;Choi, Mun-Taek;Kim, Mun-Sang;Kim, Dai-Jin
    • Transactions of the Korean Society of Mechanical Engineers A / v.35 no.7 / pp.737-743 / 2011
  • Service robots are equipped with various sensors such as vision cameras, sonar sensors, laser scanners, and microphones. Although these sensors have their own functions, some of them can be made to work together to perform more complicated functions. Audio-visual fusion is a typical and powerful combination of audio and video sensors, because audio information is complementary to visual information and vice versa. Human beings also mainly depend on visual and auditory information in their daily lives. In this paper, we conduct two studies using audio-vision fusion: one on enhancing the performance of sound localization, and the other on improving robot attention through sound localization and face detection.
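One common ingredient of microphone-based sound localization (assumed here for illustration, not taken from the paper) is estimating the time difference of arrival (TDOA) between two microphones by cross-correlation and converting it to a bearing angle:

```python
import numpy as np

FS = 16000    # sample rate (Hz), assumed
C = 343.0     # speed of sound (m/s)
D = 0.2       # microphone spacing (m), assumed

# Synthetic test: mic2 hears the same noise signal 5 samples later than mic1.
rng = np.random.default_rng(0)
signal = rng.standard_normal(4096)
delay = 5
mic1 = signal
mic2 = np.concatenate([np.zeros(delay), signal[:-delay]])

# Cross-correlate and find the lag of the peak.
corr = np.correlate(mic2, mic1, mode="full")
lag = np.argmax(corr) - (len(mic1) - 1)   # estimated delay in samples
tau = lag / FS                            # delay in seconds

# Far-field geometry: sin(theta) = c * tau / d.
angle = np.degrees(np.arcsin(np.clip(C * tau / D, -1.0, 1.0)))
print(lag, round(angle, 1))
```

A face detected near the estimated bearing can then confirm the speaker's direction, which is the kind of audio-visual attention the paper describes.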

Implementation of a Task Level Pipelined Multicomputer RV860-PIPE for Computer Vision Applications (컴퓨터 비젼 응용을 위한 태스크 레벨 파이프라인 멀티컴퓨터 RV860-PIPE의 구현)

  • Lee, Choong-Hwan;Kim, Jun-Sung;Park, Kyu-Ho
    • Journal of the Korean Institute of Telematics and Electronics B / v.33B no.1 / pp.38-48 / 1996
  • We implemented and evaluated the performance of a task-level pipelined multicomputer, "RV860-PIPE (Real-time Vision i860 system using PIPEline)," for computer vision applications. RV860-PIPE is a message-passing MIMD computer with a ring interconnection network, which is appropriate for vision processing. We designed the node computer of RV860-PIPE around a 64-bit microprocessor to provide generality and high processing power for various vision algorithms. Furthermore, to reduce the communication overhead between node computers and between a node computer and the frame grabber, we designed dedicated high-speed communication channels between them. We showed the practical applicability of the implemented system by evaluating the performance of various computer vision applications such as edge detection, real-time moving-object tracking, and real-time face recognition.

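The payoff of a task-level pipeline like the one above can be sketched with a simple fill-and-drain timing model (stage times below are assumed, not the paper's measurements): latency per frame is the sum of the stage times, but once the pipeline is full, a finished frame emerges every bottleneck interval, so the slowest stage sets the throughput.

```python
# Back-of-envelope model of an N-stage task-level pipeline.

def pipeline_times(stage_ms, n_frames):
    """Return (total ms to process n_frames, steady-state frames/sec)."""
    bottleneck = max(stage_ms)                            # slowest stage
    total = sum(stage_ms) + (n_frames - 1) * bottleneck   # fill + steady flow
    return total, 1000.0 / bottleneck

# Hypothetical 3-stage vision pipeline: grab, edge detection, matching.
total_ms, fps = pipeline_times([20, 40, 30], n_frames=100)
print(total_ms, fps)
```

This is also why balancing the vision task evenly across node computers matters: the 40 ms stage alone limits the pipeline to 25 frames/sec here.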

Respiration Rate Measurement based on Motion Compensation using Infrared Camera (열화상 카메라를 이용한 움직임 보정 기반 호흡 수 계산)

  • Kwon, Jun Hwan;Shin, Cheung Soo;Kim, Jeongmin;Oh, Kyeong Taek;Yoo, Sun Kook
    • Journal of Korea Multimedia Society / v.21 no.9 / pp.1076-1089 / 2018
  • Respiration is the process of moving air into and out of the lungs, and the energy exchanged during breathing changes temperature, especially that of the face. Respiration monitoring with an infrared camera measures this breathing-induced temperature change. The conventional method measures respiration under the assumption that the subject does not move, so it cannot measure the respiration rate accurately when the subject moves while breathing. In addition, the respiration rate is conventionally obtained by counting the peaks of the breathing waveform within a specific window, which can also be inaccurate. In this paper, we use KLT tracking and block matching to compensate for limited weak movements during breathing and extract the respiration waveform. To increase the accuracy of the respiration rate, the peak positions used in the calculation are refined from single sample points to a higher resolution. Through this process, the respiration signal could be extracted even under weak motion, and the respiration rate could be measured robustly over various time windows.
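The basic peak-counting step can be sketched as follows (illustrative only; the paper's sub-sample peak refinement and motion compensation are not reproduced here): count local maxima of the breathing waveform in a time window and convert to breaths per minute.

```python
import numpy as np

def respiration_rate_bpm(waveform, fs):
    """Breaths per minute from local maxima above the waveform's mean level."""
    x = np.asarray(waveform, dtype=float)
    mean = x.mean()
    peaks = 0
    for i in range(1, len(x) - 1):
        # A peak: above the mean level and higher than both neighbors.
        if x[i] > mean and x[i] > x[i - 1] and x[i] >= x[i + 1]:
            peaks += 1
    duration_s = len(x) / fs
    return 60.0 * peaks / duration_s

# Synthetic check: a clean 0.25 Hz waveform is 15 breaths per minute.
fs = 10                                   # 10 samples/s, assumed camera rate
t = np.arange(0, 60, 1 / fs)              # one minute of data
breath = np.sin(2 * np.pi * 0.25 * t)
print(respiration_rate_bpm(breath, fs))
```

On a real thermal signal, the waveform would first be motion-compensated and smoothed; naive neighbor comparison like this is sensitive to noise, which is one motivation for the paper's higher-resolution peak localization.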