• Title/Summary/Keyword: Skin Color Detection

Search Results: 289

Face Tracking Method based on Neural Oscillatory Network Using Color Information (컬러 정보를 이용한 신경 진동망 기반 얼굴추적 방법)

  • Hwang, Yong-Won;Oh, Sang-Rok;You, Bum-Jae;Lee, Ji-Yong;Park, Mig-Non;Jeong, Mun-Ho
    • Journal of the Institute of Electronics Engineers of Korea SC
    • /
    • v.48 no.2
    • /
    • pp.40-46
    • /
    • 2011
  • This paper proposes a real-time face detection and tracking system that uses neural oscillators, as well as a new algorithm, which can be applied to access-regulation systems or user-authentication control systems. We study a way to track faces using a neural oscillatory network that imitates the information-handling ability of the neural networks of humans and animals and the biological dynamic characteristics of a single neuron. The proposed system can broadly be broken into two stages. The first stage is face extraction, which involves acquiring real-time 24-bit RGB color video from a low-cost webcam. The LEGION (Locally Excitatory Globally Inhibitory Oscillator Network) algorithm is used as the face-extraction step that precedes face tracking. The second stage is a face-tracking method that finds the leader neuron with the greatest connection strength among the neighboring neurons of the extracted face region. With the proposed method, essential requirements of face tracking, such as stability, as well as the scale problem, can be resolved.
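
The second stage above hinges on picking a leader neuron with the greatest connection strength among its neighbors. The abstract does not define the connection weights, so the Python sketch below is only a rough illustration: it treats each pixel of the extracted face region as a neuron and uses a Gaussian color-similarity weight between 4-neighbors. The weighting scheme and the `find_leader_neuron` helper are assumptions, not the authors' LEGION formulation.

```python
import numpy as np

def find_leader_neuron(image, face_mask, sigma=10.0):
    """Pick the 'leader' pixel-neuron inside a binary face mask.

    Each pixel is treated as a neuron connected to its 4-neighbors; the
    connection weight is a Gaussian of the color difference (an assumption,
    since the abstract does not spell out the weights).  The leader is the
    in-mask pixel with the largest summed weight to its in-mask neighbors.
    """
    img = image.astype(np.float64)
    h, w = face_mask.shape
    strength = np.zeros((h, w), dtype=np.float64)

    # Accumulate similarity weights to the four neighbors
    # (np.roll wraps at the borders, which is acceptable for a sketch).
    for dy, dx in ((0, 1), (0, -1), (1, 0), (-1, 0)):
        shifted = np.roll(img, shift=(dy, dx), axis=(0, 1))
        shifted_mask = np.roll(face_mask, shift=(dy, dx), axis=(0, 1))
        diff = np.linalg.norm(img - shifted, axis=2)
        weight = np.exp(-(diff ** 2) / (2.0 * sigma ** 2))
        strength += weight * shifted_mask      # only in-mask neighbors count

    strength[~face_mask] = -np.inf             # the leader must lie inside the face
    return np.unravel_index(np.argmax(strength), strength.shape)  # (row, col)
```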

Transcription Profiles of Human Cells in Response to Sodium Arsenite Exposure

  • Lee, Te-Chang;Konan Peck;Yih, Ling-Huei
    • Toxicological Research
    • /
    • v.17
    • /
    • pp.59-69
    • /
    • 2001
  • Arsenic exposure is associated with several human diseases, including cancers, atherosclerosis, hypertension, and cerebrovascular diseases. In cultured cells, arsenite, an inorganic arsenic compound, was demonstrated to interfere with many physiological functions, such as enhancement of oxidative stress, delay of cell cycle progression, and induction of structural and numerical changes of chromosomes. The objective of this study is to investigate the effects of arsenic exposure on gene expression profiles by a colorimetric cDNA microarray technique. HFW (normal human diploid skin fibroblasts), CL3 (human lung adenocarcinoma cell line), and HaCaT (immortalized human keratinocyte cell line) cells were treated with 5 μM or 10 μM sodium arsenite for 6 or 16 h, respectively. Using a dual-color detection system, the expression profile of arsenite-treated cultures was compared to that of control cultures. Several differentially expressed genes were identified on the microarray membranes. For example, MDM2, SWI/SNF, ubiquitin specific protease 4, MAP3K11, RecQ protein-like 5, and ribosomal protein L10a were consistently induced in all three cell types by arsenite, whereas prohibitin, cyclin D1, nucleolar protein 1, PCNA, Nm23, and immediate early protein (ETR101) were apparently inhibited. The present results suggest that arsenite insult altered the expression of several genes participating in cellular responses to DNA damage, stress, transcription, and cell cycle arrest.
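
As a point of reference for the dual-color comparison step, the minimal sketch below shows how two-channel microarray intensities are typically reduced to per-gene log2 ratios. The fold-change cutoff and the function name are illustrative assumptions, not values or code from the study.

```python
import numpy as np

def dual_color_log_ratios(treated, control, fold_cutoff=2.0):
    """Compare two-channel (dual-color) microarray spot intensities.

    treated / control: 1-D arrays of background-corrected intensities for the
    arsenite-treated and control channels of each gene.  The 2-fold cutoff is
    an illustrative choice, not a threshold taken from the paper.
    """
    treated = np.asarray(treated, dtype=float)
    control = np.asarray(control, dtype=float)

    # log2 ratio per gene; positive = induced, negative = inhibited.
    log_ratio = np.log2(treated / control)

    cutoff = np.log2(fold_cutoff)
    induced = np.where(log_ratio >= cutoff)[0]
    inhibited = np.where(log_ratio <= -cutoff)[0]
    return log_ratio, induced, inhibited
```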


Development of a Cost-Effective Tele-Robot System Delivering Speaker's Affirmative and Negative Intentions (화자의 긍정·부정 의도를 전달하는 실용적 텔레프레즌스 로봇 시스템의 개발)

  • Jin, Yong-Kyu;You, Su-Jeong;Cho, Hye-Kyung
    • The Journal of Korea Robotics Society
    • /
    • v.10 no.3
    • /
    • pp.171-177
    • /
    • 2015
  • A telerobot offers a more engaging and enjoyable interaction with people at a distance by communicating via audio, video, expressive gestures, body pose, and proxemics. To provide these potential benefits at a reasonable cost, this paper presents a telepresence robot system for video communication that can deliver the speaker's head motion through its display stanchion. Head gestures such as nodding and head-shaking convey crucial information during conversation. We can also infer a speaker's eye gaze, which is known as one of the key non-verbal signals for interaction, from his or her head pose. To develop an efficient head-tracking method, a 3D cylinder-like head model is employed, and the Harris corner detector is combined with the Lucas-Kanade optical flow, which is known to be suitable for extracting the 3D motion information of the model. In particular, a skin color-based face detection algorithm is proposed to achieve robust performance under varying head orientations while maintaining a reasonable computational cost. The performance of the proposed head-tracking algorithm is verified through experiments using the BU standard data sets. The design of the robot platform is also described, along with supporting systems such as video transmission and the robot control interfaces.
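
A minimal sketch of the Harris-corner plus pyramidal Lucas-Kanade combination mentioned above, assuming OpenCV; the `face_box` argument (e.g., from a skin-color-based detector) and all parameter values are illustrative, and the fitting of the 3D cylinder head model is not shown.

```python
import cv2
import numpy as np

def track_head_features(prev_bgr, curr_bgr, face_box):
    """Track corner features inside a detected face box between two frames.

    face_box = (x, y, w, h) from a face detector.  Returns matched
    (previous, current) point pairs that a 3D cylinder head model
    could then be fitted to.
    """
    prev_gray = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2GRAY)

    # Restrict Harris corner detection to the face region.
    x, y, w, h = face_box
    mask = np.zeros_like(prev_gray)
    mask[y:y + h, x:x + w] = 255
    corners = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100,
                                      qualityLevel=0.01, minDistance=7,
                                      mask=mask, useHarrisDetector=True, k=0.04)
    if corners is None:
        return None, None

    # Pyramidal Lucas-Kanade optical flow to the current frame.
    next_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                   corners, None,
                                                   winSize=(21, 21), maxLevel=3)
    good = status.ravel() == 1
    return corners[good], next_pts[good]
```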

Study of body movement monitoring utilizing nano-composite strain sensors containing carbon nanotubes and silicone rubber

  • Azizkhani, Mohammadbagher;Kadkhodapour, Javad;Anaraki, Ali Pourkamali;Hadavand, Behzad Shirkavand;Kolahchi, Reza
    • Steel and Composite Structures
    • /
    • v.35 no.6
    • /
    • pp.779-788
    • /
    • 2020
  • Multi-walled carbon nanotubes (MWCNT) coupled with silicone rubber (SR) can yield practical strain sensors from accessible materials, with good stretchability and high sensitivity. Employing these materials, and given that this combination has been addressed in only a few studies, this study presents a low-cost, durable, and stretchable strain sensor that performs excellently over a high number of repeated cycles. Great stability was observed during a cyclic test of 2000 cycles. Ultrahigh sensitivity (GF > 1227) along with good extensibility (ε > 120%) was observed while testing the sensor at different strain rates and various numbers of cycles. Further investigation is dedicated to the sensor's performance in detecting human body movements. Not only was the sensor tested on small strains, such as vibrations on the throat, but larger strains, as observed in the extension/bending of muscle joints such as the knee, were also monitored and recorded. Given its applicability and low cost, this sensor is promising for skin-mountable devices that detect human body motions.
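
The sensitivity figure GF > 1227 refers to the gauge factor, conventionally the relative resistance change per unit strain. A minimal sketch of that calculation follows; the sample values are illustrative, not measurements from the paper.

```python
def gauge_factor(r0, r, strain):
    """Gauge factor GF = (ΔR / R0) / ε for a resistive strain sensor."""
    delta_r = r - r0
    return (delta_r / r0) / strain

# Illustrative numbers only (not data from the paper):
# a sensor whose resistance triples at 20 % strain has GF = 10.
print(gauge_factor(r0=1.0e3, r=3.0e3, strain=0.20))
```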

Real-time Face Localization for Video Monitoring (무인 영상 감시 시스템을 위한 실시간 얼굴 영역 추출 알고리즘)

  • 주영현;이정훈;문영식
    • Journal of the Korean Institute of Telematics and Electronics C
    • /
    • v.35C no.11
    • /
    • pp.48-56
    • /
    • 1998
  • In this paper, a moving object detection and face region extraction algorithm that can be used in video monitoring systems is presented. The proposed algorithm is composed of two stages. In the first stage, each frame of an input video sequence is analyzed using three measures based on image pixel differences. If the current frame contains moving objects, their skin regions are extracted in the second stage using color and frame-difference information. Since the proposed algorithm does not rely on computationally expensive features such as optical flow, it is well suited for real-time applications. Experiments on various sequences have shown the robustness of the proposed algorithm.
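
A minimal sketch of the two-stage idea, assuming OpenCV: frame differencing flags motion, and a skin-color mask restricts the result to candidate face regions. The YCrCb bounds are commonly used skin-color limits, not the measures or thresholds from the paper.

```python
import cv2
import numpy as np

def extract_moving_skin_regions(prev_bgr, curr_bgr,
                                diff_thresh=25,
                                skin_low=(0, 133, 77), skin_high=(255, 173, 127)):
    """Stage 1: flag motion by frame differencing; stage 2: keep skin-colored
    pixels inside the moving area.  The YCrCb bounds are common skin-color
    limits, not the thresholds used in the paper."""
    # Frame difference as a simple motion measure.
    gray_prev = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    gray_curr = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2GRAY)
    motion = cv2.absdiff(gray_curr, gray_prev)
    _, motion_mask = cv2.threshold(motion, diff_thresh, 255, cv2.THRESH_BINARY)

    # Skin-color mask in YCrCb space.
    ycrcb = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2YCrCb)
    skin_mask = cv2.inRange(ycrcb, np.array(skin_low, np.uint8),
                            np.array(skin_high, np.uint8))

    # Candidate face/skin regions = moving AND skin-colored.
    return cv2.bitwise_and(skin_mask, motion_mask)
```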


Yawn Recognition Algorithm for Prevention of Drowsy Driving (졸음운전 방지를 위한 하품 인식 알고리즘)

  • Yoon, Won-Jong;Lee, Jaesung
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2013.10a
    • /
    • pp.447-450
    • /
    • 2013
  • This paper proposes a way to prevent drowsy driving by recognizing the driver's eyes and yawns using a front camera. The method uses the Viola-Jones algorithm to detect the eye and mouth regions within the detected face region. In the eye region, it uses the Hough transform to find the eye circles in order to identify drowsiness. In the mouth region, it determines whether the driver is yawning by testing a sub-window, applying an HSV filter, and detecting the skin color of the tongue. The test results show that the yawn recognition rate reaches up to 90%. It is expected that the method introduced in this paper can contribute to reducing the number of drowsy-driving accidents.
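
A minimal sketch of the mouth-region part of this pipeline, assuming OpenCV's bundled Haar cascades for the Viola-Jones detection; the HSV range for the tongue/inner mouth and the `open_ratio` cutoff are illustrative assumptions, and the Hough-transform eye check is omitted.

```python
import cv2
import numpy as np

# Haar cascade shipped with OpenCV (Viola-Jones face detector).
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_yawn_candidate(frame_bgr, red_low=(0, 80, 60), red_high=(10, 255, 255),
                          open_ratio=0.25):
    """Very rough yawn test: find the face, take its lower third as the mouth
    sub-window, and measure how much of it matches a reddish (tongue / inner
    mouth) HSV range.  The HSV bounds and open_ratio are illustrative only."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        mouth_roi = frame_bgr[y + 2 * h // 3:y + h, x:x + w]   # lower third of face
        hsv = cv2.cvtColor(mouth_roi, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, np.array(red_low, np.uint8),
                           np.array(red_high, np.uint8))
        if mask.mean() / 255.0 > open_ratio:
            return True          # large reddish area in the mouth window -> possible yawn
    return False
```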


Extraction of Tongue Region using Graph and Geometric Information (그래프 및 기하 정보를 이용한 설진 영역 추출)

  • Kim, Keun-Ho;Lee, Jeon;Choi, Eun-Ji;Ryu, Hyun-Hee;Kim, Jong-Yeol
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.56 no.11
    • /
    • pp.2051-2057
    • /
    • 2007
  • In Oriental medicine, the status of the tongue is an important indicator for diagnosing one's health, reflecting physiological and clinicopathological changes in the inner parts of the body. Tongue diagnosis is not only convenient but also non-invasive, and is therefore widely used in Oriental medicine. However, tongue diagnosis is strongly affected by examination conditions, such as the light source, the patient's posture, and the doctor's condition. To develop an automatic tongue diagnosis system for objective and standardized diagnosis, segmenting the tongue is essential but difficult, since the colors of the tongue, the lips, and the skin around the mouth are similar. The proposed method includes preprocessing, graph-based over-segmentation, detecting positions with a local minimum over shading, detecting edges from color differences, and estimating edge geometry from the probable structure of a tongue, where the preprocessing performs down-sampling to reduce computation time, histogram equalization, and edge enhancement. A tongue was segmented by the proposed method from a face image acquired with a digital tongue diagnosis system. According to the evaluation of three Oriental medical doctors, the method produced segmented regions that include the effective information and exclude non-tongue regions. It can be used to make an objective and standardized diagnosis.
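
The abstract names graph-based over-segmentation without detailing it; as a stand-in, the sketch below uses the Felzenszwalb-Huttenlocher graph segmentation available in scikit-image, preceded by the down-sampling and histogram equalization the preprocessing step mentions. All parameter values are illustrative.

```python
from skimage import exposure, img_as_float
from skimage.segmentation import felzenszwalb

def oversegment_tongue_image(rgb, scale=100, sigma=0.8, min_size=50):
    """Pre-process and over-segment a face-with-tongue image.

    Naive 2x down-sampling and histogram equalization mirror the preprocessing
    the abstract mentions; the Felzenszwalb graph segmentation is a stand-in
    for the paper's graph-based over-segmentation, and the parameters are
    illustrative."""
    small = img_as_float(rgb)[::2, ::2]          # naive 2x down-sampling
    equalised = exposure.equalize_hist(small)    # simple contrast enhancement
    labels = felzenszwalb(equalised, scale=scale, sigma=sigma, min_size=min_size)
    return labels                                # integer segment label per pixel
```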

Lip Contour Detection by Multi-Threshold (다중 문턱치를 이용한 입술 윤곽 검출 방법)

  • Kim, Jeong Yeop
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.9 no.12
    • /
    • pp.431-438
    • /
    • 2020
  • In this paper, a method to extract the lip contour using multiple thresholds is proposed. Spyridonos et al. proposed a method to extract the lip contour. Its first step is to obtain the Q image from an RGB-to-YIQ transform. The second step is to find the lip corner points by change-point detection and to split the Q image into upper and lower parts at the corner points. Candidate lip contours can then be obtained by applying thresholds to the Q image. For each candidate contour, a feature variance is calculated, and the contour with the maximum variance is adopted as the final contour. The feature variance 'D' is based on the absolute difference near the contour points. The conventional method has three problems. The first is related to the lip corner points: the variance calculation depends on many skin pixels, which decreases the accuracy and affects the splitting of the Q image. Second, there is no analysis of color systems other than YIQ. YIQ is a good choice; however, other color systems such as HSV, CIELUV, and YCrCb should also be considered. The final problem is related to the selection of the optimal contour. In the selection process, the maximum of the average feature variance for the pixels near the contour points is used; this causes the extracted contour to shrink compared to the ground-truth contour. To solve the first problem, the proposed method excludes some of the skin pixels, giving a 30% performance increase. For the second problem, the HSV, CIELUV, and YCrCb coordinate systems were tested, and no significant dependence of the conventional method on the color system was found. For the final problem, the maximum of the total sum of the feature variance is adopted rather than the maximum of the average feature variance, giving a 46% performance increase. By combining all of these solutions, the proposed method achieves twice the accuracy and stability of the conventional method.
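
A minimal sketch of the first steps of this pipeline: converting RGB to YIQ (approximate NTSC coefficients), keeping the Q channel, and producing one candidate lip mask per threshold. The threshold values are illustrative, not those used by Spyridonos et al. or by the proposed method.

```python
import numpy as np

# Approximate NTSC RGB -> YIQ transform (rows give Y, I, Q).
RGB_TO_YIQ = np.array([[0.299,  0.587,  0.114],
                       [0.596, -0.274, -0.322],
                       [0.211, -0.523,  0.312]])

def q_channel(rgb):
    """Return the Q component of an 8-bit RGB image."""
    return np.tensordot(rgb.astype(np.float64) / 255.0, RGB_TO_YIQ[2],
                        axes=([-1], [0]))

def candidate_lip_masks(rgb, thresholds=(0.02, 0.04, 0.06)):
    """Produce one candidate lip mask per threshold on the Q channel.
    The threshold values are illustrative, not taken from the paper."""
    q = q_channel(rgb)
    return [(q > t) for t in thresholds]
```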

Person Identification based on Clothing Feature (의상 특징 기반의 동일인 식별)

  • Choi, Yoo-Joo;Park, Sun-Mi;Cho, We-Duke;Kim, Ku-Jin
    • Journal of the Korea Computer Graphics Society
    • /
    • v.16 no.1
    • /
    • pp.1-7
    • /
    • 2010
  • With the widespread use of vision-based surveillance systems, the capability for person identification is now an essential component. However, the CCTV cameras used in surveillance systems tend to produce relatively low-resolution images, making it difficult to use face recognition techniques for person identification. Therefore, an algorithm is proposed for person identification in CCTV camera images based on the clothing. Whenever a person is authenticated at the main entrance of a building, the clothing feature of that person is extracted and added to the database. Using a given image, the clothing area is detected using background subtraction and skin color detection techniques. The clothing feature vector is then composed of textural and color features of the clothing region, where the textural feature is extracted based on a local edge histogram, while the color feature is extracted using octree-based quantization of a color map. When given a query image, the person can then be identified by finding the most similar clothing feature from the database, where the Euclidean distance is used as the similarity measure. Experimental results show an 80% success rate for person identification with the proposed algorithm, and only a 43% success rate when using face recognition.
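
A minimal sketch of the matching step described above: the clothing feature vector is the concatenation of an edge histogram and a color histogram, and identification picks the database entry with the smallest Euclidean distance. The histogram extraction itself (local edge histogram, octree-based color quantization) is not shown, and the helper names are assumptions.

```python
import numpy as np

def clothing_feature(edge_hist, color_hist):
    """Concatenate a local-edge histogram and a quantized-color histogram
    into a single clothing feature vector (both assumed normalized)."""
    return np.concatenate([edge_hist, color_hist])

def identify_person(query_feature, database):
    """Return the database entry with the smallest Euclidean distance to the
    query.  `database` maps a person id -> stored clothing feature vector."""
    best_id, best_dist = None, np.inf
    for person_id, feature in database.items():
        dist = np.linalg.norm(query_feature - feature)
        if dist < best_dist:
            best_id, best_dist = person_id, dist
    return best_id, best_dist
```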

The Estimation of Hand Pose Based on Mean-Shift Tracking Using the Fusion of Color and Depth Information for Marker-less Augmented Reality (비마커 증강현실을 위한 색상 및 깊이 정보를 융합한 Mean-Shift 추적 기반 손 자세의 추정)

  • Lee, Sun-Hyoung;Hahn, Hern-Soo;Han, Young-Joon
    • Journal of the Korea Society of Computer and Information
    • /
    • v.17 no.7
    • /
    • pp.155-166
    • /
    • 2012
  • This paper proposes a new method for estimating the hand pose through a Mean-Shift tracking algorithm that fuses color and depth information for marker-less augmented reality. In marker-less augmented reality, most previous studies detect the hand region using skin color against a simple experimental background. Because finger features must be detected on the hand, the hand poses that can be measured by cameras are considerably restricted. In contrast, the proposed method can easily detect the hand pose against a complex background through a new Mean-Shift tracking method that fuses the color and depth information from a 3D sensor. The proposed hand pose estimation uses the center of gravity and two random points on the hand, without strong constraints. The proposed Mean-Shift tracking method has about 50 pixels less error than a general tracking method that uses only color values. The augmented reality experiments show that, on complex backgrounds, the performance of the proposed method is as good as that of a marker-based one.
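
For contrast with the proposed color-and-depth fusion, the sketch below shows the standard color-only Mean-Shift baseline, assuming OpenCV: a hue histogram of the initial hand region is back-projected onto each frame and the search window is shifted toward the densest area. The depth term that the paper fuses into the tracker is not included.

```python
import cv2

def init_hand_histogram(frame_bgr, hand_box):
    """Build a hue histogram of the initial hand region for back-projection."""
    x, y, w, h = hand_box
    roi = frame_bgr[y:y + h, x:x + w]
    hsv = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0], None, [32], [0, 180])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    return hist

def track_hand(frame_bgr, hand_box, hist):
    """One Mean-Shift step on the hue back-projection; returns the new box."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
    _, new_box = cv2.meanShift(backproj, hand_box, term_crit)
    return new_box
```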