• Title/Summary/Keyword: Camera-based Recognition

Development of a Vision Based Fall Detection System For Healthcare (헬스케어를 위한 영상기반 기절동작 인식시스템 개발)

  • So, In-Mi;Kang, Sun-Kyung;Kim, Young-Un;Lee, Chi-Geun;Jung, Sung-Tae
    • Journal of the Korea Society of Computer and Information / v.11 no.6 s.44 / pp.279-287 / 2006
  • This paper proposes a method that detects falls from stereo images in order to recognize emergency situations. It extracts 3D visual information for training and testing and uses an HMM (Hidden Markov Model) as the recognition algorithm. The proposed system builds background images from the two camera views, extracts the moving object from the input video sequence by differencing the input image against the background image, finds the bounding rectangle of the moving object, and recovers 3D information using the calibration data of the two cameras. We measured the fall-recognition rate using either the variation of the rectangle's width and height or the variation of the 3D location of the rectangle's center point as features. Experimental results show that the variation of the 3D center-point location achieves a higher recognition rate than the variation of width and height.
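
The background-subtraction and bounding-rectangle step described in this abstract can be roughly sketched with OpenCV. This is a minimal single-camera illustration under assumed inputs (the video path is a placeholder), not the authors' stereo/HMM pipeline.

```python
# Rough sketch of the motion-segmentation step: subtract a background model,
# clean the mask, and take the bounding rectangle of the largest moving blob.
# The stereo 3D reconstruction and HMM stages of the paper are not reproduced.
import cv2

cap = cv2.VideoCapture("input.avi")                       # placeholder video path
bg_model = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = bg_model.apply(frame)                       # foreground mask
    fg_mask = cv2.medianBlur(fg_mask, 5)                  # suppress noise
    contours, _ = cv2.findContours(fg_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        largest = max(contours, key=cv2.contourArea)
        x, y, w, h = cv2.boundingRect(largest)            # moving-object box
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("motion", frame)
    if cv2.waitKey(30) & 0xFF == 27:                      # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```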

Recording Support System for Off-Line Conference using Face and Speaker Recognition (얼굴 인식 및 화자 정보를 이용한 오프라인 회의 기록 지원 시스템)

  • Son, Yun-Sik;Jung, Jin-Woo;Park, Han-Mu;Kye, Seung-Chul;Yoon, Jong-Hyuk;Jung, Nak-Chun;Oh, Se-Man
    • Journal of the Korean Institute of Intelligent Systems / v.18 no.1 / pp.66-71 / 2008
  • Recent multimedia technology supports various application services built on efficient video compression and networking techniques. An on-line video conference system is a typical example that uses these two technologies effectively, and it is an efficient way for distant participants to meet. Unfortunately, off-line face-to-face meetings are still more frequent than on-line conferences, and support systems for them have received little attention. In this paper, we propose a recording support system for off-line conferences using face and speaker recognition. The system locates the current speaker with three microphones and three webcams, analyzes the face region captured by the currently active webcam to recognize the speaker's identity, and finally tracks the speaker and records the conference together with the extracted speaker information.
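
As a rough illustration of the face-detection part only (the microphone-array speaker localization is not shown), OpenCV's bundled Haar cascade can find face regions in a webcam frame. The camera index and parameters below are assumptions, not the authors' setup.

```python
# Minimal sketch: detect face regions in frames from one webcam with OpenCV's
# bundled frontal-face Haar cascade. Speaker localization and identity
# recognition from the paper are not reproduced.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)                      # assumed: the active webcam

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1,
                                          minNeighbors=5, minSize=(60, 60))
    for (x, y, w, h) in faces:                 # one box per detected face
        cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == 27:            # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```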

Performance Analysis of Exercise Gesture-Recognition Using Convolutional Block Attention Module (합성 블록 어텐션 모듈을 이용한 운동 동작 인식 성능 분석)

  • Kyeong, Chanuk;Jung, Wooyong;Seon, Joonho;Sun, Young-Ghyu;Kim, Jin-Young
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.21 no.6 / pp.155-161 / 2021
  • Real-time gesture recognition through a camera has been widely studied in recent years. Conventional gesture-recognition studies extract only a small number of features from human joints, so their classification models achieve low accuracy. In this paper, CBAM (Convolutional Block Attention Module), which classifies images with high accuracy, is adopted as the classification model, and an algorithm that calculates joint angles for each action is presented to address this issue. Images of five exercise gestures from the fitness-posture dataset provided by AI Hub are applied to the classification model. Eight joint angles that are important for distinguishing the exercise gestures are extracted from the images with MediaPipe, a graph-based framework provided by Google, and these features are used as the input on which the classification model is trained. Simulation results confirm that the proposed model classifies the exercise gestures with high accuracy.
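
The joint-angle feature described above can be sketched as follows. The angle function is a generic three-point angle, the image path is a placeholder, and the landmarks used here come from MediaPipe's public Pose solution rather than the exact eight joints chosen in the paper.

```python
# Sketch of the joint-angle feature: the angle at joint b formed by points
# a-b-c, computed from MediaPipe Pose landmarks. The CBAM classifier and the
# paper's specific eight joints are not reproduced here.
import numpy as np
import cv2
import mediapipe as mp

def joint_angle(a, b, c):
    """Angle in degrees at point b formed by the segments b->a and b->c."""
    a, b, c = np.array(a), np.array(b), np.array(c)
    cosang = np.dot(a - b, c - b) / (
        np.linalg.norm(a - b) * np.linalg.norm(c - b) + 1e-9)
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

mp_pose = mp.solutions.pose
image = cv2.imread("squat.jpg")                      # placeholder image path
with mp_pose.Pose(static_image_mode=True) as pose:
    result = pose.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

if result.pose_landmarks:
    lm = result.pose_landmarks.landmark
    P = mp_pose.PoseLandmark
    hip = (lm[P.RIGHT_HIP].x, lm[P.RIGHT_HIP].y)
    knee = (lm[P.RIGHT_KNEE].x, lm[P.RIGHT_KNEE].y)
    ankle = (lm[P.RIGHT_ANKLE].x, lm[P.RIGHT_ANKLE].y)
    print("right knee angle:", joint_angle(hip, knee, ankle))
```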

Real-Time Vehicle License Plate Detection Based on Background Subtraction and Cascade of Boosted Classifiers

  • Sarker, Md. Mostafa Kamal;Song, Moon Kyou
    • The Journal of Korean Institute of Communications and Information Sciences / v.39C no.10 / pp.909-919 / 2014
  • License plate (LP) detection is the most critical part of an automatic LP recognition (LPR) system. A typical LPR system consists of two steps, LP detection (LPD) and character recognition. In this paper, we propose an efficient vehicle-to-LP detection framework that combines an adaptive GMM (Gaussian Mixture Model) with a cascade of boosted classifiers to build a faster vehicle LP detector. With a fixed camera, a background model can be built with a GMM and motion extracted by background subtraction. First, the adaptive GMM finds regions of interest (ROIs) in which motion detectors segment the vehicle area as blob ROIs. Second, a cascade of boosted classifiers is run on the blob ROIs to detect the LP. Experimental results on our test video at a resolution of 720×576 show that the LPD rate of the proposed system is 99.14% and the average computation time is approximately 42 ms.
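
A rough OpenCV sketch of the two-stage idea (adaptive GMM motion ROIs followed by a boosted-cascade detector run only inside each ROI) is shown below. The cascade XML file and video path are placeholders for a separately trained plate detector and test clip, not the authors' models or data.

```python
# Sketch: adaptive GMM background subtraction proposes motion blob ROIs, then a
# boosted cascade classifier searches for license plates only inside those ROIs.
import cv2
import numpy as np

plate_cascade = cv2.CascadeClassifier("plate_cascade.xml")    # placeholder model
bg_model = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)
cap = cv2.VideoCapture("traffic.avi")                         # placeholder video
kernel = np.ones((5, 5), np.uint8)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg = bg_model.apply(frame)                                # adaptive GMM mask
    fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, kernel)         # clean the mask
    contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) < 2000:                         # ignore small blobs
            continue
        x, y, w, h = cv2.boundingRect(c)                      # vehicle blob ROI
        roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
        plates = plate_cascade.detectMultiScale(roi, 1.1, 4)  # cascade on ROI only
        for (px, py, pw, ph) in plates:
            cv2.rectangle(frame, (x + px, y + py),
                          (x + px + pw, y + py + ph), (0, 0, 255), 2)

cap.release()
```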

Multi-Scale, Multi-Object and Real-Time Face Detection and Head Pose Estimation Using Deep Neural Networks (다중크기와 다중객체의 실시간 얼굴 검출과 머리 자세 추정을 위한 심층 신경망)

  • Ahn, Byungtae;Choi, Dong-Geol;Kweon, In So
    • The Journal of Korea Robotics Society / v.12 no.3 / pp.313-321 / 2017
  • Some of the most frequently performed tasks in human-robot interaction (HRI), intelligent vehicles, and security systems are face-related applications such as face recognition, facial expression recognition, driver state monitoring, and gaze estimation. In these applications, accurate head pose estimation is an important issue, but conventional methods have lacked the accuracy, robustness, or processing speed needed in practical use. In this paper, we propose a novel method for estimating head pose with a monocular camera. The proposed algorithm is based on a deep neural network for multi-task learning on small grayscale images. The network jointly detects multi-view faces and estimates head pose under hard environmental conditions such as illumination changes and large pose changes. The proposed framework quantitatively and qualitatively outperforms the state-of-the-art method, with an average head-pose error of less than 4.5° in real time.
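
A minimal multi-task model in PyTorch illustrates the general idea of jointly predicting a face/non-face score and three head-pose angles from one shared backbone. The layer sizes and input resolution are arbitrary assumptions; this is not the authors' network.

```python
# Toy multi-task CNN: a shared convolutional backbone feeds two heads, one for
# face/non-face classification and one for head-pose (yaw, pitch, roll)
# regression on small grayscale crops. Layer sizes are illustrative only.
import torch
import torch.nn as nn

class FacePoseNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(                 # shared features
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.face_head = nn.Linear(32, 2)              # face / non-face logits
        self.pose_head = nn.Linear(32, 3)              # yaw, pitch, roll

    def forward(self, x):
        feat = self.backbone(x)
        return self.face_head(feat), self.pose_head(feat)

net = FacePoseNet()
dummy = torch.randn(4, 1, 48, 48)                      # batch of grayscale crops
face_logits, pose = net(dummy)
print(face_logits.shape, pose.shape)                   # (4, 2) and (4, 3)
```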

De-interlacing and Block Code Generation For Outsole Model Recognition In Moving Picture (동영상에서 신발 밑창 모델 인식을 위한 인터레이스 제거 및 블록 코드 생성 기법)

  • Kim Cheol-Ki
    • Journal of Intelligence and Information Systems / v.12 no.1 / pp.33-41 / 2006
  • This paper presents a method that automatically recognizes the model type of products flowing along a conveyor belt. When an NTSC-based camera captures a moving object, interlacing artifacts appear in the image; such frames cannot be processed directly, so suitable post-processing is required. The proposed method first removes the artifacts with a de-interlacing step, then obtains the rectangular region of the object by thresholding. The rectangular region is divided into several blocks through edge detection, the number of pixels in each block is counted, the blocks are re-coded using their average, and the product is classified into a model type. Experiments show that the proposed method achieves a high classification ratio.
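
A crude line-interpolation de-interlacer followed by Otsu thresholding gives a rough sketch of the pre-processing described above; it is not the paper's exact method, and the image path is a placeholder.

```python
# Sketch: de-interlace by keeping one field and interpolating the missing
# lines, then threshold to get the object's bounding rectangle. The block-code
# generation and model classification steps of the paper are not reproduced.
import cv2
import numpy as np

frame = cv2.imread("outsole_frame.png", cv2.IMREAD_GRAYSCALE)  # placeholder image
field = frame[0::2, :]                                  # keep even scan lines only
deint = cv2.resize(field, (frame.shape[1], frame.shape[0]),
                   interpolation=cv2.INTER_LINEAR)      # interpolate back up

_, mask = cv2.threshold(deint, 0, 255,
                        cv2.THRESH_BINARY + cv2.THRESH_OTSU)   # Otsu threshold
ys, xs = np.nonzero(mask)
if len(xs):
    x0, x1, y0, y1 = xs.min(), xs.max(), ys.min(), ys.max()
    print("object rectangle:", (x0, y0, x1, y1))        # region for block coding
```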

User Data Collection and Personalization Services in Mobile Shopping Environment (모바일 쇼핑 환경에서 사용자 데이터 수집 및 개인화 서비스 방법)

  • Kim, Sung-jin;Kim, Sung-gyu;Oh, Chang-heon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2018.05a / pp.560-561 / 2018
  • The spread of smartphones is increasing the share of mobile shopping in the online shopping market. Most mobile shopping services are delivered through applications, and for them personalization services built on user data collection and analysis are very important. Therefore, in this paper, we implement a product barcode recognition function and a machine-learning-based product image recognition function that use the smartphone camera to collect user data in a mobile shopping environment. Together with push notification services, the implemented functions enable user data collection and analysis as well as personalization services for online shopping platform applications.
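
The barcode-recognition function can be sketched with the pyzbar library, which is an assumption here (the paper does not name its decoder); the image path is a placeholder camera capture.

```python
# Sketch: decode a product barcode from a camera image with pyzbar. The
# machine-learning image recognition and push-notification parts of the paper
# are not reproduced.
import cv2
from pyzbar.pyzbar import decode

image = cv2.imread("product_photo.jpg")            # placeholder camera capture
for barcode in decode(image):
    code = barcode.data.decode("utf-8")            # e.g. an EAN-13 product code
    print(barcode.type, code)
    # In a shopping app, `code` would be sent to the backend to log the user's
    # interest in this product for later personalization.
```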

Human Iris Recognition System using Wavelet Transform and LVQ (웨이브렛 변환과 LVQ를 이용한 홍채인식 시스템)

  • Lee, Gwan-Yong;Im, Sin-Yeong;Jo, Seong-Won
    • The Transactions of the Korean Institute of Electrical Engineers D / v.49 no.7 / pp.389-398 / 2000
  • Popular methods for checking the identity of individuals include passwords and ID cards. These conventional methods of user identification and authentication are not entirely reliable because they can be stolen or forgotten. As an alternative, biometric technology has received much attention over the last few decades. In this paper, we propose an efficient system for recognizing the identity of a living person by analyzing iris patterns, which are more stable and distinctive than other biometric measurements. The proposed system is based on the wavelet transform and a competitive neural network with improved mechanisms. After preprocessing the iris data acquired through a CCD camera, feature vectors are extracted using the Haar wavelet transform, and LVQ (Learning Vector Quantization) is used to classify them. We improve the overall performance of the system by optimizing the size of the feature vectors and by introducing an efficient initialization of the weight vectors and a new method for determining the winner, which increases the recognition accuracy of LVQ. The experiments confirmed that the proposed system has great potential for efficient and effective use in real applications.
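
The Haar-wavelet feature extraction step can be sketched with the PyWavelets package (an assumption; the paper does not name a library), keeping the low-frequency subband of an already preprocessed iris image as the feature vector. The image path is a placeholder.

```python
# Sketch: 3-level 2-D Haar wavelet decomposition of a preprocessed iris image;
# the low-frequency approximation subband is flattened into a feature vector.
# The LVQ classifier of the paper is not reproduced here.
import cv2
import numpy as np
import pywt

iris = cv2.imread("iris_normalized.png", cv2.IMREAD_GRAYSCALE)  # placeholder
coeffs = pywt.wavedec2(iris.astype(np.float32), "haar", level=3)
approx = coeffs[0]                       # low-frequency subband after 3 levels
feature = approx.flatten()               # compact feature vector for a classifier
print("feature length:", feature.size)
```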

Implementation for the Biometric User Identification System Based on Smart Card (SMART CARD 기반 생체인식 사용자 인증시스템의 구현)

  • 주동현;고기영;김두영
    • Journal of the Institute of Convergence Signal Processing / v.5 no.1 / pp.25-31 / 2004
  • This paper studies how to improve the recognition rate of a biometric user identification system that uses data stored in advance on a contactless IC smart card. The proposed system identifies a user by analyzing his or her iris pattern. First, the iris region is extracted from an eye image captured by a CCD camera, and PCA coefficients computed with the GHA (Generalized Hebbian Algorithm) are stored on the smart card. At verification time, the user's live biometric information is compared with the information on the smart card, and when the two match, the data are classified with an SVM (Support Vector Machine). Experimental results showed that this system outperformed previously developed systems.
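
A rough scikit-learn sketch of the classification idea: project iris feature vectors with PCA (standing in for the GHA-learned PCA coefficients) and classify with an SVM. The data below are random placeholders, not real enrolled iris features.

```python
# Sketch of the classification stage: PCA-reduced iris features (standing in
# for GHA-learned PCA coefficients) fed to an SVM classifier.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1024))         # placeholder iris feature vectors
y = rng.integers(0, 10, size=200)        # placeholder user IDs

model = make_pipeline(PCA(n_components=32), SVC(kernel="rbf"))
model.fit(X, y)
print("predicted user:", model.predict(X[:1]))
```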

A Practical Solution toward SLAM in Indoor environment Based on Visual Objects and Robust Sonar Features (가정환경을 위한 실용적인 SLAM 기법 개발 : 비전 센서와 초음파 센서의 통합)

  • Ahn, Sung-Hwan;Choi, Jin-Woo;Choi, Min-Yong;Chung, Wan-Kyun
    • The Journal of Korea Robotics Society / v.1 no.1 / pp.25-35 / 2006
  • Improving the practicality of SLAM requires various sensors to be fused effectively in order to cope with the uncertainty induced by both the environment and the sensors. Combining sonar and vision sensors offers the advantages of economy and complementary cooperation: it can remedy the false data association and divergence problems of sonar sensors, and it can overcome the low-frequency SLAM updates caused by the computational burden and sensitivity to illumination changes of vision sensors. In this paper, we propose a SLAM method that joins sonar sensors and a stereo camera. It consists of two schemes: extracting robust point and line features from sonar data, and recognizing planar visual objects with a multi-scale Harris corner detector and its SIFT descriptors against a pre-constructed object database. Fusing the sonar features and visual objects through EKF-SLAM gives correct data association via object recognition and high-frequency updates via the sonar features, increasing the robustness and accuracy of SLAM in indoor environments. The performance of the proposed algorithm was verified by experiments in a home-like environment.
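
A generic EKF measurement-update step in NumPy illustrates the fusion idea at its core. This is the textbook linear-measurement update with made-up numbers, not the paper's full EKF-SLAM over sonar features and visual objects.

```python
# Generic EKF measurement update: given a prior state x with covariance P, a
# linearized measurement model H with noise covariance R, and a measurement z,
# fuse the observation into the state estimate.
import numpy as np

def ekf_update(x, P, z, H, R):
    """One EKF update step; returns the corrected state and covariance."""
    y = z - H @ x                          # innovation
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# Tiny example: a robot pose (x, y, theta) corrected by a direct 2-D position fix.
x = np.array([1.0, 2.0, 0.1])
P = np.eye(3) * 0.5
H = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
R = np.eye(2) * 0.05
z = np.array([1.2, 1.9])
x, P = ekf_update(x, P, z, H, R)
print(x)
```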
