• Title/Summary/Keyword: Image Feature


A System of Audio Data Analysis and Masking Personal Information Using Audio Partitioning and Artificial Intelligence API (오디오 데이터 내 개인 신상 정보 검출과 마스킹을 위한 인공지능 API의 활용 및 음성 분할 방법의 연구)

  • Kim, TaeYoung;Hong, Ji Won;Kim, Do Hee;Kim, Hyung-Jong
    • Journal of the Korea Institute of Information Security & Cryptology / v.30 no.5 / pp.895-907 / 2020
  • With the growing influence of multimedia content beyond text-based content, services that help process the information it contains bring great convenience. Representative features of such services are searching for and masking sensitive data. Solutions that search and mask text and image information are easy to find, but despite the recognized need for searching and masking parts of audio data, solutions are scarce because of the technical difficulty involved. In this study, we propose a web application that provides searching and masking functions for audio data using an audio partitioning method. To achieve this goal, we evaluated several speech-to-text conversion APIs to choose one suited to our purpose and developed regular expressions for detecting sensitive information. Finally, we evaluated the accuracy of the developed searching and masking features. The contribution of this work lies in the design and implementation of searching and masking sensitive information in audio data, verified through experiments on the various functions.
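
The abstract describes speech-to-text conversion followed by regular-expression matching to locate sensitive items. Below is a minimal Python sketch of that idea, assuming a word-level transcript with character offsets and timestamps from some STT API; the patterns and the timing format are illustrative assumptions, not the paper's actual expressions.

```python
import re

# Hypothetical patterns for Korean personal data; the paper's actual
# regular expressions are not given in the abstract.
PATTERNS = {
    "phone": re.compile(r"01[016789]-?\d{3,4}-?\d{4}"),
    "resident_id": re.compile(r"\d{6}-?[1-4]\d{6}"),
}


def find_sensitive_spans(transcript: str):
    """Return (label, start, end) character spans of sensitive matches."""
    spans = []
    for label, pattern in PATTERNS.items():
        for m in pattern.finditer(transcript):
            spans.append((label, m.start(), m.end()))
    return sorted(spans, key=lambda s: s[1])


def spans_to_mask_intervals(spans, word_timings):
    """Map character spans to (start_sec, end_sec) audio mask intervals.

    word_timings: list of (word, char_start, char_end, t_start, t_end)
    tuples, an assumed word-level output format of the STT API.
    """
    intervals = []
    for _, s, e in spans:
        hits = [(t0, t1) for _, cs, ce, t0, t1 in word_timings
                if cs < e and ce > s]          # words overlapping the span
        if hits:
            intervals.append((min(t0 for t0, _ in hits),
                              max(t1 for _, t1 in hits)))
    return intervals


if __name__ == "__main__":
    text = "제 번호는 010-1234-5678 입니다"
    print(find_sensitive_spans(text))          # detected phone-number span
```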

A Research for the pattern of the Instrument Panel Design of passenger cars (승용차 인스트루먼트 패널 디자인 유형의 연구)

  • Koo, Sang
    • Archives of design research / v.12 no.4 / pp.99-108 / 1999
  • The interior space of a passenger car consists of many partial elements, and the instrument panel is the most important among them: it determines the overall image of the interior design as well as the spatial variation, drivability, and safety of the interior space. The instrument panel of early passenger cars was conceived as a wall between the engine room and the passenger cabin on which the instruments for the driver were fitted, so centrally mounted instruments were the typical feature regardless of the position of the driver's seat. As automobiles became more functional and better equipped, driver-oriented instrument panels with energy-absorbing materials were developed, which was the beginning of today's variety of instrument panel designs. On a few recent cars, instrument panels show a tendency to return to the central instrument mounting of the past, driven by stricter safety regulations, new production technologies, and enhanced drivability. The analysis of several recent instrument panels can be summarized as follows: minimizing total volume for better frontal visibility; energy-absorbing and passive structures to meet stricter impact regulations; and a revival of central instrument mounting for convenience and safety by minimizing the difference in the driver's focal length.


Gaze Detection System using Real-time Active Vision Camera (실시간 능동 비전 카메라를 이용한 시선 위치 추적 시스템)

  • Park, Kang Ryoung
    • Journal of KIISE:Software and Applications / v.30 no.12 / pp.1228-1238 / 2003
  • This paper presents a new and practical computer vision method for detecting the monitor position at which the user is looking. In general, the user moves both the face and the eyes to gaze at a certain monitor position. Previous studies used only a single wide-view camera capturing the user's whole face; in that case the image resolution is too low and the fine movements of the user's eyes cannot be detected exactly. We therefore implement the gaze detection system with a dual-camera setup (a wide-view and a narrow-view camera). To locate the user's eye position accurately, the narrow-view camera provides auto focusing and auto panning/tilting based on the 3D facial feature positions detected by the wide-view camera. In addition, dual IR-LED illuminators are used to detect facial features, especially eye features. Experimental results show that the system runs in real time and that the error between the computed gaze positions and the real ones is about 3.44 cm RMS.
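
The narrow-view camera is panned and tilted toward 3D facial feature positions estimated by the wide-view camera. The small sketch below illustrates that geometric step under an assumed camera-centered coordinate frame (x right, y up, z forward); the frame convention and units are illustrative assumptions, not the paper's.

```python
import math


def pan_tilt_to_target(feature_xyz, camera_xyz=(0.0, 0.0, 0.0)):
    """Pan/tilt angles (degrees) pointing the narrow-view camera at a 3D
    facial feature position estimated from the wide-view camera.

    Assumed frame: x to the right, y up, z along the camera's forward
    axis; this convention is illustrative, not from the paper.
    """
    dx = feature_xyz[0] - camera_xyz[0]
    dy = feature_xyz[1] - camera_xyz[1]
    dz = feature_xyz[2] - camera_xyz[2]
    pan = math.degrees(math.atan2(dx, dz))                    # left/right rotation
    tilt = math.degrees(math.atan2(dy, math.hypot(dx, dz)))   # up/down rotation
    return pan, tilt


# Example: an eye located 5 cm right, 3 cm below, 60 cm in front of the camera.
print(pan_tilt_to_target((5.0, -3.0, 60.0)))
```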

A Robust Hand Recognition Method to Variations in Lighting (조명 변화에 안정적인 손 형태 인지 기술)

  • Choi, Yoo-Joo;Lee, Je-Sung;You, Hyo-Sun;Lee, Jung-Won;Cho, We-Duke
    • The KIPS Transactions: Part B / v.15B no.1 / pp.25-36 / 2008
  • In this paper, we present a hand recognition approach that is robust to sudden illumination changes. The proposed approach constructs a background model with respect to hue and hue gradient in HSI color space and extracts the foreground hand region from an input image using background subtraction. Eighteen features are defined for a hand pose, and a multi-class SVM (Support Vector Machine) is applied to learn and classify hand poses based on these features. By incorporating the hue gradient into the background subtraction, the approach robustly extracts the hand contour under varying illumination. A hand pose is described by two eigenvalues normalized by the size of the OBB (Object-Oriented Bounding Box) and sixteen feature values representing the number of hand contour points included in each subrange of the OBB. We compared RGB-based background subtraction, hue-based background subtraction, and the proposed approach under sudden illumination changes and demonstrated the robustness of the proposed approach. In the experiment, we built a hand pose training model from 2,700 sample hand images of six subjects representing the nine digits from one to nine. Our implementation achieves a 92.6% recognition rate on 1,620 hand images under various lighting conditions using this training model.
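
The classification stage pairs an eighteen-dimensional pose descriptor with a multi-class SVM. A minimal scikit-learn sketch of that stage follows, using random placeholder features in place of the OBB-normalized eigenvalues and contour-point counts; the kernel and parameter values are assumptions.

```python
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Placeholder data: 18 features per hand image (2 OBB-normalized
# eigenvalues + 16 contour-point counts), 9 pose classes for digits 1-9.
# Random values stand in for features extracted from real images.
rng = np.random.default_rng(0)
X = rng.random((2700, 18))
y = rng.integers(1, 10, size=2700)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.4, random_state=0)

# SVC handles multi-class problems with a one-vs-one scheme; the kernel
# and regularization values below are assumptions, not the paper's.
clf = SVC(kernel="rbf", C=10.0, gamma="scale")
clf.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```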

Grading meat quality of Hanwoo based on SFTA and AdaBoost (SFTA와 AdaBoost 기반 한우의 육질 등급 분석)

  • Cho, Hyunhak;Kim, Eun Kyeong;Jang, Eunseok;Kim, Kwang Baek;Kim, Sungshin
    • Journal of the Korean Institute of Intelligent Systems / v.26 no.6 / pp.433-438 / 2016
  • This paper proposes a grade prediction method for measuring meat quality in Hanwoo (Korean native cattle) using classification and feature extraction algorithms. The classification algorithm is AdaBoost, and texture features of the given ultrasound images are extracted using SFTA. As an initial phase, we selected ultrasound images of Hanwoo to verify the experimental results; ultimately, we aim to develop a diagnostic decision support system for human body scans using ultrasound images. The advantages of using ultrasound images of Hanwoo are accurate grade prediction without slaughter, optimized shipping and feeding schedules, and economic benefits. Grade prediction using biometric data such as ultrasound images has been studied in countries such as the USA, Japan, and Korea, based on prediction methods for different images obtained from different machines, but the prediction accuracy has remained low. We therefore propose a prediction method for meat quality. Compared with the real grades, the experimental results demonstrate that the proposed method is superior to the other methods.
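
The grading stage boosts weak learners over SFTA texture features. Below is a rough sketch with scikit-learn's AdaBoostClassifier on placeholder feature vectors; the feature dimensionality, number of grade classes, and boosting parameters are assumptions, and an actual SFTA implementation is assumed to exist separately.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

# Placeholder SFTA texture features; a real SFTA implementation
# (fractal analysis over threshold-decomposed images) is assumed to be
# available separately, and the dimensions/labels below are illustrative.
rng = np.random.default_rng(1)
X = rng.random((300, 24))             # assumed 24-dim SFTA feature vectors
y = rng.integers(0, 5, size=300)      # assumed 5 quality-grade labels

# AdaBoost over decision stumps (scikit-learn's default weak learner).
clf = AdaBoostClassifier(n_estimators=100, random_state=0)
print("5-fold CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```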

Hand Motion Recognition Algorithm Using Skin Color and Center of Gravity Profile (피부색과 무게중심 프로필을 이용한 손동작 인식 알고리즘)

  • Park, Youngmin
    • The Journal of the Convergence on Culture Technology / v.7 no.2 / pp.411-417 / 2021
  • The field that studies human-computer interaction is called HCI (Human-Computer Interaction); it studies how humans and computers communicate with each other and recognize information. This study concerns hand gesture recognition for human interaction. It examines the problems of existing recognition methods and proposes an algorithm to improve the recognition rate. The hand region is extracted from an image containing the shape of a human hand based on skin color information, and the center of gravity profile is calculated using principal component analysis. We propose a method to increase the recognition rate of hand gestures by comparing the obtained information with predefined shapes. The existing center of gravity profile has produced incorrect recognition when the hand is deformed by rotation; in this study, the profile is re-anchored so that the contour point farthest from the center of gravity becomes the starting point, yielding a more robust algorithm. No gloves or special sensor markers are used for hand gesture recognition, and no separate blue screen is installed. To resolve misrecognition, the feature vector at the nearest distance is found and an appropriate threshold is obtained to distinguish between success and failure.
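
The key step is a center-of-gravity profile whose starting point is the contour point farthest from the centroid, which removes the dependence on hand rotation. The OpenCV sketch below illustrates such a profile, assuming a simple fixed YCrCb skin threshold and a fixed number of profile bins; the paper's actual skin model and PCA step are not reproduced.

```python
import cv2
import numpy as np


def gravity_profile(image_bgr, bins=36):
    """Rotation-tolerant center-of-gravity profile of the largest
    skin-colored region; thresholds and bin count are assumptions."""
    ycrcb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    if not contours:
        return None

    cnt = max(contours, key=cv2.contourArea)
    m = cv2.moments(cnt)
    if m["m00"] == 0:
        return None
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]   # center of gravity

    pts = cnt.reshape(-1, 2).astype(float)
    dists = np.hypot(pts[:, 0] - cx, pts[:, 1] - cy)

    # Start the profile at the contour point farthest from the centroid,
    # so the descriptor does not depend on in-plane hand rotation.
    dists = np.roll(dists, -int(np.argmax(dists)))

    # Resample to a fixed-length, scale-normalized profile.
    idx = np.linspace(0, len(dists) - 1, bins).astype(int)
    return dists[idx] / dists.max()
```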

Improved Skin Color Extraction Based on Flood Fill for Face Detection (얼굴 검출을 위한 Flood Fill 기반의 개선된 피부색 추출기법)

  • Lee, Dong Woo;Lee, Sang Hun;Han, Hyun Ho;Chae, Gyoo Soo
    • Journal of the Korea Convergence Society / v.10 no.6 / pp.7-14 / 2019
  • In this paper, we propose a Cascade Classifier face detection method using Haar-like features, complemented by the Flood Fill algorithm for regions lost to illumination and shadow in YCbCr color space extraction. Because skin color extraction using the conventional YCbCr color space relies only on threshold values, a Cascade Classifier using Haar-like features can suffer from noise and loss regions caused by lighting, shadow, and similar conditions. To solve this problem, noise is removed by erosion and dilation operations, and the loss regions are estimated using the Flood Fill algorithm. A relaxed YCbCr threshold is additionally allowed for the estimated regions, and any remaining loss regions are filled with the average color of the additionally accepted areas. Faces are then extracted using the Haar-like Cascade Classifier. Compared with a Haar-like Cascade Classifier using only the YCbCr color space, the accuracy of the proposed method is improved by about 4% and the detection rate by about 2%.
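
A rough sketch of the general pipeline described here: YCbCr skin thresholding, erosion/dilation to remove noise, flood fill to recover interior loss regions, then Haar cascade face detection on the masked image. Threshold values, kernel sizes, and the use of OpenCV's bundled frontal-face cascade are assumptions; the paper's relaxed-threshold and average-color filling steps are simplified to a plain hole fill.

```python
import cv2
import numpy as np


def skin_mask_with_hole_fill(image_bgr):
    """YCrCb skin mask cleaned by erosion/dilation, with interior regions
    lost to shadow/illumination recovered via flood fill from the border.
    Assumes the top-left pixel is background; thresholds are common defaults."""
    ycrcb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))

    kernel = np.ones((3, 3), np.uint8)
    mask = cv2.erode(mask, kernel, iterations=1)    # remove speckle noise
    mask = cv2.dilate(mask, kernel, iterations=2)   # restore region size

    flood = mask.copy()
    ff_mask = np.zeros((mask.shape[0] + 2, mask.shape[1] + 2), np.uint8)
    cv2.floodFill(flood, ff_mask, (0, 0), 255)      # fill outer background
    holes = cv2.bitwise_not(flood)                  # pixels never reached
    return cv2.bitwise_or(mask, holes)              # mask with holes filled


def detect_faces(image_bgr):
    """Haar cascade face detection restricted to the skin mask."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.bitwise_and(gray, gray, mask=skin_mask_with_hole_fill(image_bgr))
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
```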

Effect on self-enhancement of deep-learning inference by repeated training of false detection cases in tunnel accident image detection (터널 내 돌발상황 오탐지 영상의 반복 학습을 통한 딥러닝 추론 성능의 자가 성장 효과)

  • Lee, Kyu Beom;Shin, Hyu Soung
    • Journal of Korean Tunnelling and Underground Space Association / v.21 no.3 / pp.419-432 / 2019
  • Most deep learning models are trained by supervised learning, which uses labeled data composed of inputs and corresponding outputs. Because labeled data are generated manually, their labeling accuracy is relatively high, but securing them requires considerable cost and time. In addition, the main goal of supervised learning is to improve detection performance on 'True Positive' data, not to reduce the occurrence of 'False Positive' detections. In this paper, unpredictable 'False Positive' detections were observed from models trained with labeled 'True Positive' data while monitoring a deep learning-based CCTV accident detection system in operation at a tunnel monitoring center. These 'False Positives' for 'fire' or 'person' objects frequently arose from the lights of working vehicles, sunlight reflected at the tunnel entrance, long dark features on parts of the lane or a car, and so on. To solve this problem, a deep learning model was developed by training on the 'False Positive' data collected in the field together with the labeled data. As a result, compared with the model trained only on the existing labeled data, re-inference performance on the labeled data was improved. In addition, re-inference on the 'False Positive' data showed that the number of 'False Positives' for persons was further reduced when the training set included more 'False Positive' data. By training on the 'False Positive' data, the deep learning model's suitability for field application improved automatically.
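
The retraining idea is to feed confirmed false-positive frames back in as background (object-free) samples so the detector is penalized for firing on them. The schematic Python sketch below illustrates that loop; `train_fn` and the annotation dictionary format are placeholders for whatever detection framework is actually used, not the paper's implementation.

```python
def retrain_with_false_positives(labeled_set, false_positive_images, train_fn):
    """Fold field-collected false-positive frames back into training.

    Each false-positive frame is added as a background sample with no
    object annotations, so the detector is penalized for the detections
    it previously produced on it. `train_fn` and the annotation format
    are placeholders for the actual detection framework.
    """
    hard_negatives = [{"image": img, "boxes": [], "labels": []}
                      for img in false_positive_images]
    return train_fn(labeled_set + hard_negatives)


# Typical cycle: deploy the model, let operators flag false alarms on the
# CCTV feed (vehicle lights, sunlight at the portal, dark lane marks),
# retrain with those frames, redeploy, and repeat as new cases appear.
```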

A Design and Implementation of Fitness Application Based on Kinect Sensor

  • Lee, Won Joo
    • Journal of the Korea Society of Computer and Information / v.26 no.3 / pp.43-50 / 2021
  • In this paper, we design and implement KITNESS, a Windows application that gives feedback on the accuracy of fitness motions based on the Kinect sensor. The application uses Kinect's camera and joint recognition sensor to guide the user to exercise in the correct fitness posture. The distance between the user and the Kinect is measured using Kinect's IR emitter and IR depth sensor, and the user's joint positions and the skeleton data of each joint are obtained. From these data, the distances for each joint position and posture of the user are calculated and the accuracy of the posture is determined. The user can check their posture through Kinect's RGB camera: if the posture is correct, the skeleton information is displayed as a green line, and if not, the inaccurate part is displayed as a red line for intuitive feedback. Through this feedback on posture accuracy, the user can exercise in the correct position on their own. The application classifies exercises into three areas (neck, waist, and leg) and increases Kinect's recognition rate by excluding postures in each exercise area that Kinect cannot recognize due to overlapping joints. At the end of a session, the last exercise is shown as an image for 5 seconds to give a sense of accomplishment and encourage continued exercise.
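
Posture accuracy from Kinect skeleton data typically reduces to comparing joint angles or distances against a reference pose. The small sketch below computes the angle at a joint from three skeleton points and maps it to the green/red feedback described above; the tolerance value and the angle-based criterion are assumptions, since the abstract only states that distances between joints are compared.

```python
import numpy as np


def joint_angle(a, b, c):
    """Angle (degrees) at joint b formed by skeleton points a-b-c,
    e.g. shoulder-elbow-wrist positions from Kinect skeleton data."""
    a, b, c = (np.asarray(p, dtype=float) for p in (a, b, c))
    v1, v2 = a - b, c - b
    cos_ang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos_ang, -1.0, 1.0))))


def posture_color(measured_deg, target_deg, tol_deg=15.0):
    """Green skeleton line when the joint is within tolerance of the
    reference posture, red otherwise (tolerance is an assumed value)."""
    return "green" if abs(measured_deg - target_deg) <= tol_deg else "red"


# Example with hypothetical shoulder/elbow/wrist coordinates (cm).
angle = joint_angle((0, 0, 0), (0, -30, 0), (25, -30, 5))
print(angle, posture_color(angle, 90.0))
```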

Object Detection on the Road Environment Using Attention Module-based Lightweight Mask R-CNN (주의 모듈 기반 Mask R-CNN 경량화 모델을 이용한 도로 환경 내 객체 검출 방법)

  • Song, Minsoo;Kim, Wonjun;Jang, Rae-Young;Lee, Ryong;Park, Min-Woo;Lee, Sang-Hwan;Choi, Myung-seok
    • Journal of Broadcast Engineering / v.25 no.6 / pp.944-953 / 2020
  • Object detection plays a crucial role in self-driving systems. With advances in image recognition based on deep convolutional neural networks, research on object detection has been actively explored. In this paper, we propose a lightweight version of Mask R-CNN, one of the most widely used models for object detection, to efficiently predict the location and shape of various objects in the road environment. Furthermore, feature maps are adaptively re-calibrated to improve detection performance by applying an attention module to neural network layers that play different roles within Mask R-CNN. Various experimental results on real driving scenes demonstrate that the proposed method maintains high detection performance with significantly reduced network parameters.
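
The abstract describes adaptively re-calibrating feature maps with an attention module. Below is a generic squeeze-and-excitation style channel attention block in PyTorch as an illustration; the paper's actual module design and where it is inserted in Mask R-CNN are not specified in the abstract, so this is only a stand-in.

```python
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention that re-calibrates
    feature maps; a generic stand-in, since the paper's exact attention
    module and its placement inside Mask R-CNN are not described here."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # squeeze: global context
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                            # per-channel weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                 # re-calibrated feature map


# Example: re-weight a 256-channel backbone feature map.
feat = torch.randn(2, 256, 32, 32)
print(ChannelAttention(256)(feat).shape)             # torch.Size([2, 256, 32, 32])
```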