• Title/Summary/Keyword: Feature image


Gaze Detection System using Real-time Active Vision Camera (실시간 능동 비전 카메라를 이용한 시선 위치 추적 시스템)

  • 박강령
    • Journal of KIISE:Software and Applications
    • /
    • v.30 no.12
    • /
    • pp.1228-1238
    • /
    • 2003
  • This paper presents a new and practical computer-vision method for detecting the position on a monitor where the user is looking. In general, users move both their face and eyes to gaze at a certain monitor position. Previous research used only one wide-view camera capturing the user's whole face; in that case the image resolution is too low and the fine movements of the user's eyes cannot be detected exactly. We therefore implement the gaze detection system with a dual-camera setup (a wide-view and a narrow-view camera). To locate the user's eye position accurately, the narrow-view camera performs auto focusing and auto panning/tilting based on the 3D facial feature positions detected by the wide-view camera. In addition, we use dual IR-LED illuminators to detect facial features, especially eye features. Experimental results show that the system runs in real time, with an RMS error of about 3.44 cm between the computed gaze positions and the real ones.
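The abstract describes steering a narrow-view camera by auto panning/tilting toward a 3D eye position estimated by the wide-view camera. A minimal sketch of how such pan/tilt angles could be derived with basic trigonometry (this is an illustrative geometry, not the paper's actual calibration method):

```python
import math

def pan_tilt_angles(x, y, z):
    """Pan/tilt angles (degrees) that point a camera at a 3D point
    (x, y, z) in the camera's own frame, where +z is the optical
    axis, +x is right, and +y is up."""
    pan = math.degrees(math.atan2(x, z))                   # rotate left/right
    tilt = math.degrees(math.atan2(y, math.hypot(x, z)))   # then up/down
    return pan, tilt
```

A point straight ahead on the optical axis yields (0, 0); a point offset 45 degrees to the right yields a pan of 45.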

A Robust Hand Recognition Method to Variations in Lighting (조명 변화에 안정적인 손 형태 인지 기술)

  • Choi, Yoo-Joo;Lee, Je-Sung;You, Hyo-Sun;Lee, Jung-Won;Cho, We-Duke
    • The KIPS Transactions:PartB
    • /
    • v.15B no.1
    • /
    • pp.25-36
    • /
    • 2008
  • In this paper, we present a hand recognition approach that is robust to sudden illumination changes. The proposed approach constructs a background model with respect to hue and hue gradient in HSI color space and extracts the foreground hand region from an input image using background subtraction. Eighteen features are defined for a hand pose, and a multi-class SVM (Support Vector Machine) is applied to learn and classify hand poses based on these features. By incorporating the hue gradient into background subtraction, the approach robustly extracts the hand contour under varying illumination. A hand pose is described by two eigenvalues normalized by the size of the OBB (object-oriented bounding box) and sixteen feature values representing the number of hand contour points falling in each subrange of the OBB. We compared RGB-based background subtraction, hue-based background subtraction, and the proposed approach under sudden illumination changes and demonstrated the robustness of the proposed approach. In the experiment, we built a hand pose training model from 2,700 sample hand images of six subjects representing the nine digits from one to nine. Our implementation achieves a 92.6% recognition rate on 1,620 hand images captured under various lighting conditions.
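The core idea above is background subtraction in hue space rather than RGB, since hue is less sensitive to illumination intensity. A minimal sketch of a hue-based foreground mask (the threshold value and the omission of the hue-gradient term are simplifying assumptions, not the paper's exact model):

```python
import numpy as np

def hue_foreground_mask(frame_hue, bg_hue, thresh=20.0):
    """Label pixels whose hue differs from the background model by
    more than `thresh` degrees as foreground. Hue is circular
    (0-360), so the difference is taken on the circle."""
    diff = np.abs(frame_hue - bg_hue)
    diff = np.minimum(diff, 360.0 - diff)   # circular hue distance
    return diff > thresh
```

Because the hue distance wraps around, a pixel at 355 degrees against a 10-degree background differs by only 15 degrees and stays classified as background.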

Grading meat quality of Hanwoo based on SFTA and AdaBoost (SFTA와 AdaBoost 기반 한우의 육질 등급 분석)

  • Cho, Hyunhak;Kim, Eun Kyeong;Jang, Eunseok;Kim, Kwang Baek;Kim, Sungshin
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.26 no.6
    • /
    • pp.433-438
    • /
    • 2016
  • This paper proposes a method for predicting meat-quality grades of Hanwoo (Korean native cattle) using classification and feature extraction algorithms. The classifier is AdaBoost, and texture features of the given ultrasound images are extracted using SFTA. As an initial phase, we selected ultrasound images of Hanwoo to verify the experimental results; ultimately, however, we aim to develop a diagnostic decision-support system for human body scans using ultrasound images. The advantages of grading from ultrasound images of Hanwoo are accurate grade prediction without slaughter, optimized shipping and feeding schedules, and economic benefits. Grade prediction using biometric data such as ultrasound images has been studied in countries including the USA, Japan, and Korea, but prediction accuracy across images obtained from different machines remains low. We therefore propose a meat-quality prediction method. Compared with the actual grades, the experimental results demonstrate that the proposed method is superior to the other methods.
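The classifier named above, AdaBoost, combines many weak learners by reweighting misclassified samples each round. A minimal sketch with one-feature threshold stumps (a generic textbook AdaBoost, not the paper's implementation or its SFTA features):

```python
import numpy as np

def train_adaboost_stumps(X, y, n_rounds=10):
    """Minimal AdaBoost with one-feature threshold stumps.
    X: (n, d) features; y: labels in {-1, +1}.
    Returns a list of (feature, threshold, polarity, alpha)."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)
    learners = []
    for _ in range(n_rounds):
        best = None
        for f in range(d):                      # exhaustive stump search
            for t in np.unique(X[:, f]):
                for pol in (1, -1):
                    pred = np.where(pol * (X[:, f] - t) >= 0, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, f, t, pol, pred)
        err, f, t, pol, pred = best
        err = max(err, 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)   # learner weight
        w *= np.exp(-alpha * y * pred)          # up-weight mistakes
        w /= w.sum()
        learners.append((f, t, pol, alpha))
    return learners

def predict_adaboost(learners, X):
    """Sign of the alpha-weighted vote of all stumps."""
    score = np.zeros(len(X))
    for f, t, pol, alpha in learners:
        score += alpha * np.where(pol * (X[:, f] - t) >= 0, 1, -1)
    return np.where(score >= 0, 1, -1)
```

On linearly separable data a single round already finds a perfect stump; later rounds simply reinforce it.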

Hand Motion Recognition Algorithm Using Skin Color and Center of Gravity Profile (피부색과 무게중심 프로필을 이용한 손동작 인식 알고리즘)

  • Park, Youngmin
    • The Journal of the Convergence on Culture Technology
    • /
    • v.7 no.2
    • /
    • pp.411-417
    • /
    • 2021
  • The field that studies human-computer interaction is called HCI (human-computer interaction); it investigates how humans and computers communicate and recognize information from each other. This study addresses hand gesture recognition for human interaction: it examines the problems of existing recognition methods and proposes an algorithm to improve the recognition rate. The hand region is extracted from an image containing the hand based on skin color information, and the center-of-gravity profile is calculated using principal component analysis. We propose increasing the recognition rate of hand gestures by comparing the obtained profile with predefined shapes. The existing center-of-gravity profile misrecognizes hand gestures when the hand is deformed by rotation; in this study, the profile's starting point is set to the contour point farthest from the center of gravity, re-improving the center-of-gravity profile into a rotation-robust algorithm. No gloves, sensor-attached markers, or separate blue screen are used for hand gesture recognition. To resolve misrecognition, the feature vector at the nearest distance is found, and an appropriate threshold is obtained to distinguish success from failure.
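The rotation-invariance trick described above, starting the center-of-gravity profile at the contour point farthest from the centroid, can be sketched directly (an illustrative reading of the abstract, not the paper's exact pipeline):

```python
import numpy as np

def centroid_profile(contour):
    """Distance from the contour's center of gravity to each contour
    point, cyclically shifted so the profile starts at the farthest
    point. contour: (n, 2) array of (x, y) points."""
    c = contour.mean(axis=0)                    # center of gravity
    dist = np.linalg.norm(contour - c, axis=1)
    start = int(np.argmax(dist))                # rotation-invariant start
    return np.roll(dist, -start)
```

Rotating the hand permutes the contour cyclically, but the shifted profile stays aligned because its start is tied to a geometric landmark rather than an array index.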

Improved Skin Color Extraction Based on Flood Fill for Face Detection (얼굴 검출을 위한 Flood Fill 기반의 개선된 피부색 추출기법)

  • Lee, Dong Woo;Lee, Sang Hun;Han, Hyun Ho;Chae, Gyoo Soo
    • Journal of the Korea Convergence Society
    • /
    • v.10 no.6
    • /
    • pp.7-14
    • /
    • 2019
  • In this paper, we propose a face detection method using a Haar-like-feature Cascade Classifier, complemented by the Flood Fill algorithm to recover regions lost to illumination and shadow during YCbCr color space extraction. Because conventional skin color extraction in YCbCr space relies only on threshold values, it can produce noise and loss regions due to lighting, shadow, and similar conditions. To solve this problem, noise is removed by erosion and dilation operations, and the loss region is estimated using the Flood Fill algorithm. A relaxed YCbCr threshold is then additionally allowed for the estimated area, and any remaining loss region is filled with the average color of the additionally allowed areas. Faces are then extracted with the Haar-like Cascade Classifier. The accuracy of the proposed method is improved by about 4%, and the detection rate by about 2%, compared with a Haar-like Cascade Classifier using only the YCbCr color space.
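The baseline step above, skin extraction by thresholding in YCbCr space, can be sketched as follows. The Cb/Cr ranges used here (Cb in [77, 127], Cr in [133, 173]) are a commonly cited rule of thumb, not the paper's exact thresholds:

```python
import numpy as np

def skin_mask_ycbcr(rgb):
    """Classify pixels as skin by thresholding Cb/Cr after an
    RGB -> YCbCr conversion (ITU-R BT.601 coefficients)."""
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return (cb >= 77) & (cb <= 127) & (cr >= 133) & (cr <= 173)
```

A shadowed skin pixel can fall outside these fixed ranges, which is exactly the loss region the paper repairs with Flood Fill and a relaxed threshold.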

Effect on self-enhancement of deep-learning inference by repeated training of false detection cases in tunnel accident image detection (터널 내 돌발상황 오탐지 영상의 반복 학습을 통한 딥러닝 추론 성능의 자가 성장 효과)

  • Lee, Kyu Beom;Shin, Hyu Soung
    • Journal of Korean Tunnelling and Underground Space Association
    • /
    • v.21 no.3
    • /
    • pp.419-432
    • /
    • 2019
  • Most deep-learning model training proceeds by supervised learning, i.e., training on labeled data composed of inputs and corresponding outputs. Because labeled data are generated manually, their labeling accuracy is relatively high, but securing them requires heavy effort in cost and time. In addition, the main goal of supervised learning is to improve detection performance on 'true positive' data, not to reduce the occurrence of 'false positive' detections. In this paper, unpredictable false positives were observed from models trained on labeled 'true positive' data while monitoring a deep-learning-based CCTV accident detection system in operation at a tunnel monitoring center. False positives for 'fire' or 'person' objects frequently arose from the lights of work vehicles, sunlight reflected at the tunnel entrance, long dark features on parts of lanes or cars, etc. To solve this problem, a deep-learning model was developed by simultaneously training on the false positive data generated in the field and the labeled data. As a result, compared with the model trained only on the existing labeled data, re-inference performance on the labeled data improved. In addition, re-inference on the false positive data shows that the number of false positives for persons was reduced more when the training model included more false positive data. By training on false positive data, the field applicability of the deep-learning model improved automatically.

A Design and Implementation of Fitness Application Based on Kinect Sensor

  • Lee, Won Joo
    • Journal of the Korea Society of Computer and Information
    • /
    • v.26 no.3
    • /
    • pp.43-50
    • /
    • 2021
  • In this paper, we design and implement KITNESS, a Windows application that gives feedback on the accuracy of fitness motions based on the Kinect sensor. The application uses Kinect's camera and joint recognition sensor to guide the user into the correct fitness posture. The distance between the user and the Kinect is measured using Kinect's IR emitter and IR depth sensor, and the skeleton data for each of the user's joint positions are obtained. From these data, distances between joint positions are calculated for each posture, and posture accuracy is determined. Users can check their posture through Kinect's RGB camera: if the posture is correct, the skeleton information is displayed as a green line; if not, the inaccurate part is displayed as a red line for intuitive feedback. Through this feedback on positional accuracy, users can exercise on their own in the correct posture. The application classifies exercises into three areas (neck, waist, and legs) and increases Kinect's recognition rate by excluding postures that Kinect cannot recognize due to overlapping joints in each exercise area. At the end of a session, the last exercise is shown as an image for 5 seconds to give a sense of accomplishment and encourage continued exercise.
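Posture checks like the one described above are typically built on angles between skeleton joints. A minimal sketch of computing the angle at a joint from three 3D skeleton points (an assumed generic formulation; the abstract itself only mentions distances between joint positions):

```python
import math

def joint_angle(a, b, c):
    """Angle (degrees) at joint b formed by segments b->a and b->c,
    from 3D skeleton points given as (x, y, z) tuples."""
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(p * q for p, q in zip(v1, v2))
    n1 = math.sqrt(sum(p * p for p in v1))
    n2 = math.sqrt(sum(q * q for q in v2))
    cos = max(-1.0, min(1.0, dot / (n1 * n2)))  # clamp for safety
    return math.degrees(math.acos(cos))
```

A straight limb (shoulder-elbow-wrist collinear) gives roughly 180 degrees; a right-angle bend gives roughly 90, so a tolerance band around a target angle can drive the green/red skeleton feedback.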

Object Detection on the Road Environment Using Attention Module-based Lightweight Mask R-CNN (주의 모듈 기반 Mask R-CNN 경량화 모델을 이용한 도로 환경 내 객체 검출 방법)

  • Song, Minsoo;Kim, Wonjun;Jang, Rae-Young;Lee, Ryong;Park, Min-Woo;Lee, Sang-Hwan;Choi, Myung-seok
    • Journal of Broadcast Engineering
    • /
    • v.25 no.6
    • /
    • pp.944-953
    • /
    • 2020
  • Object detection plays a crucial role in self-driving systems. With advances in image recognition based on deep convolutional neural networks, object detection has been actively explored. In this paper, we propose a lightweight version of Mask R-CNN, one of the most widely used models for object detection, to efficiently predict the location and shape of various objects in road environments. Furthermore, feature maps are adaptively re-calibrated to improve detection performance by applying an attention module to neural network layers that play different roles within Mask R-CNN. Experimental results on various real driving scenes demonstrate that the proposed method maintains high detection performance with significantly fewer network parameters.
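The channel re-calibration the abstract mentions is commonly done with a squeeze-and-excitation style attention block. A numpy sketch of that general mechanism (the paper's actual module and placement within Mask R-CNN may differ):

```python
import numpy as np

def channel_attention(feat, w1, w2):
    """Squeeze-and-excitation style recalibration of a (C, H, W)
    feature map: global average pool per channel, two FC layers
    with ReLU/sigmoid, then rescale each channel."""
    s = feat.mean(axis=(1, 2))             # squeeze: (C,)
    h = np.maximum(w1 @ s, 0.0)            # excitation FC1 + ReLU
    a = 1.0 / (1.0 + np.exp(-(w2 @ h)))    # excitation FC2 + sigmoid: (C,)
    return feat * a[:, None, None]         # channel-wise recalibration
```

The output keeps the input's shape; each channel is merely scaled by a learned weight in (0, 1), which is what "adaptive re-calibration" of feature maps amounts to.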

Ultrasonic Image Analysis Using GLCM in Diffuse Thyroid Disease (미만성 갑상샘 질환에서 GLCM을 이용한 초음파 영상 분석)

  • Ye, Soo-Young
    • Journal of the Korean Society of Radiology
    • /
    • v.15 no.4
    • /
    • pp.473-479
    • /
    • 2021
  • The diagnostic criteria for diffuse thyroid disease are ambiguous, and the subjective diagnoses of experts lead to many errors. Studies on ultrasound imaging of thyroid nodules have been active, but studies on diffuse thyroid disease are insufficient. In this study, features were extracted by applying the GLCM algorithm to ultrasound images of normal and diffuse thyroid disease, and quantitative analysis was performed on the extracted feature values. For thyroid ultrasound images of patients diagnosed at W hospital (199 normal, 132 mild, and 99 moderate cases), a region of interest (50×50 pixels) was set in each of the 430 images, and six GLCM parameters were analyzed: autocorrelation, sum of squares, sum average, sum variance, cluster prominence, and energy. As a result, four of the parameters (autocorrelation, sum of squares, sum average, and sum variance) distinguished normal, mild, and moderate cases with a high recognition rate of over 90%. This study is valuable as a criterion for classifying the severity of diffuse thyroid disease in ultrasound images using the GLCM algorithm. Applying these parameters is expected to reduce errors due to visual reading in the diagnosis of thyroid disease and to serve as a secondary means of diagnosing diffuse thyroid disease.
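The GLCM underlying all six parameters is just a normalized count of how often gray-level pairs co-occur at a fixed pixel offset. A minimal sketch with one of the listed features, energy (the offset and level count here are illustrative choices, not the study's settings):

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Normalized gray-level co-occurrence matrix for one pixel
    offset (dx, dy). img: 2D integer array with values in [0, levels)."""
    m = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(max(0, -dy), h - max(0, dy)):
        for x in range(max(0, -dx), w - max(0, dx)):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()

def glcm_energy(p):
    """Energy (angular second moment): sum of squared probabilities."""
    return float((p ** 2).sum())
```

Energy is high for uniform textures (probability mass concentrated in few GLCM cells) and low for busy ones, which is why it helps separate normal from diffusely diseased tissue.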

A Study on Expression of the Film &lt;Parasite&gt; (2019) : Focusing on Genre-Shifting Characters and Actors' Acting (영화 &lt;기생충&gt;(2019)의 표현성 연구 : 장르를 변주하는 캐릭터와 배우의 연기를 중심으로)

  • Lee, A-Young
    • Journal of Korea Entertainment Industry Association
    • /
    • v.14 no.6
    • /
    • pp.77-89
    • /
    • 2020
  • The film "Parasite" portrays Korea's history and its present in a space that clearly represents the real world's hierarchy as a vertical structure. It demonstrates the problems of an insurmountable reality and the elements of various conflicts occurring below the surface of Korean society through a complex mix of human emotions and relationships. The most realistic yet unrealistic characters cross boundaries between being victims and perpetrators, defamiliarizing ordinary scenes from everyday life through their small mistakes, strange obsessions, bizarre behavior, anxious psychology, and desperate struggles. This study analyzes the expression of the film "Parasite" through its characters with the belief that the film expresses director Bong Joon-ho's consistent cinematic philosophy of taking reality beyond the traditional rules of film genres. In doing so, Bong creates a mode of expression that shifts genres as the characters' personalities amplify related behaviors, conflicts, and questions, and this is the core of the film's unique nuance and distinct humor. In addition, the personalities of the characters interact with all the film's elements (cinematic techniques, space, props, etc.), evoking effects of various meanings, which are transmitted through the actors' images and acting. In this respect, the study analyzes how the actors were cast to realistically embody the characters, how their acting harmonized with the film's other elements, and how these features were expressed.