• Title/Summary/Keyword: Recognition devices


Gesture Recognition by Analyzing a Trajectory on Spatio-Temporal Space (시공간상의 궤적 분석에 의한 제스쳐 인식)

  • 민병우;윤호섭;소정;에지마 도시야끼
    • Journal of KIISE:Software and Applications
    • /
    • v.26 no.1
    • /
    • pp.157-157
    • /
    • 1999
  • Research on gesture recognition has become a very interesting topic in the computer vision area. Gesture recognition from visual images has a number of potential applications such as HCI (Human Computer Interaction), VR (Virtual Reality), and machine vision. To overcome the technical barriers in visual processing, conventional approaches have employed cumbersome devices such as data gloves or color-marked gloves. In this research, we capture gesture images without using external devices and generate a gesture trajectory composed of point-tokens. The trajectory is spotted using phase-based velocity constraints and recognized using a discrete left-right HMM. Input vectors to the HMM are obtained by applying the LBG clustering algorithm on a polar-coordinate space, to which the point-tokens on the Cartesian space are converted. The gesture vocabulary is composed of twenty-two dynamic hand gestures for editing drawing elements. In our experiment, one hundred data per gesture were collected from twenty persons; fifty data were used for training and the other fifty for the recognition experiment. The recognition result shows about a 95% recognition rate and also the possibility that these results can be applied to several potential systems operated by gestures. The developed system runs in real time for editing basic graphic primitives in a hardware environment of a Pentium Pro (200 MHz), a Matrox Meteor graphic board, and a CCD camera, with a Windows 95 and Visual C++ software environment.
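The pipeline described above (polar-coordinate motion features, an LBG codebook, a discrete left-right HMM) can be illustrated with a minimal sketch. This is not the authors' implementation: plain k-means stands in for LBG, and all model parameters below are made up for demonstration.

```python
# Minimal sketch: quantize a gesture trajectory into discrete symbols and
# score it with a left-right HMM. K-means stands in for the LBG codebook;
# the transition/emission values are illustrative, not learned.
import numpy as np

def to_polar(points):
    """Convert consecutive point-tokens (x, y) into (magnitude, angle) motion vectors."""
    d = np.diff(points, axis=0)
    return np.stack([np.linalg.norm(d, axis=1), np.arctan2(d[:, 1], d[:, 0])], axis=1)

def build_codebook(vectors, k=8, iters=20, seed=0):
    """Plain k-means as a stand-in for LBG vector quantization."""
    rng = np.random.default_rng(seed)
    centers = vectors[rng.choice(len(vectors), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((vectors[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = vectors[labels == j].mean(axis=0)
    return centers

def quantize(vectors, centers):
    return np.argmin(((vectors[:, None] - centers) ** 2).sum(-1), axis=1)

def forward_log_prob(obs, A, B, pi):
    """Forward algorithm for a discrete HMM with scaling; returns log P(obs | model)."""
    alpha = pi * B[:, obs[0]]
    log_p = 0.0
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        s = alpha.sum()
        log_p += np.log(s)
        alpha /= s
    return log_p + np.log(alpha.sum())

# Toy usage: a synthetic circular gesture, an 8-symbol codebook,
# and a 3-state left-right HMM with assumed parameters.
t = np.linspace(0, 2 * np.pi, 60)
traj = np.stack([np.cos(t), np.sin(t)], axis=1)
feats = to_polar(traj)
symbols = quantize(feats, build_codebook(feats, k=8))
A = np.array([[0.8, 0.2, 0.0], [0.0, 0.8, 0.2], [0.0, 0.0, 1.0]])  # left-right transitions
B = np.full((3, 8), 1.0 / 8)                                        # uniform emissions
pi = np.array([1.0, 0.0, 0.0])
print("log-likelihood:", forward_log_prob(symbols, A, B, pi))
```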

Statistical Model-Based Noise Reduction Approach for Car Interior Applications to Speech Recognition

  • Lee, Sung-Joo;Kang, Byung-Ok;Jung, Ho-Young;Lee, Yun-Keun;Kim, Hyung-Soon
    • ETRI Journal
    • /
    • v.32 no.5
    • /
    • pp.801-809
    • /
    • 2010
  • This paper presents a statistical model-based noise suppression approach for speech recognition in a car environment. In order to alleviate the spectral whitening and signal distortion problems of the traditional decision-directed Wiener filter, we combine a decision-directed method with an original spectrum reconstruction method and develop a new two-stage noise reduction filter estimation scheme. When a tradeoff between performance and computational efficiency on resource-constrained automotive devices is considered, the ETSI standard advanced distributed speech recognition front-end (ETSI-AFE) can be an effective solution, and ETSI-AFE is also based on the decision-directed Wiener filter. Thus, a series of speech recognition and computational complexity tests are conducted comparing the proposed approach with ETSI-AFE. The experimental results show that the proposed approach is superior to the conventional method in terms of speech recognition accuracy, while the computational cost and frame latency are significantly reduced.
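For reference, the baseline that this paper improves upon can be sketched compactly. The following is the classic decision-directed a priori SNR estimate and the resulting Wiener gain, not the paper's two-stage scheme; the smoothing factor and noise estimate are assumed values.

```python
# Minimal sketch of a decision-directed Wiener filter front-end stage.
import numpy as np

def wiener_gain_decision_directed(noisy_power, noise_power, alpha=0.98):
    """Per-frame Wiener gains from decision-directed a priori SNR estimation.

    noisy_power: (frames, bins) noisy-speech power spectra
    noise_power: (bins,) estimated noise power spectrum
    """
    gains = np.zeros_like(noisy_power)
    prev_clean = np.zeros(noisy_power.shape[1])
    for t in range(noisy_power.shape[0]):
        post_snr = noisy_power[t] / np.maximum(noise_power, 1e-12)
        # Decision-directed: blend the previous clean-speech estimate with
        # the instantaneous (posterior SNR - 1) term.
        prio_snr = (alpha * prev_clean / np.maximum(noise_power, 1e-12)
                    + (1 - alpha) * np.maximum(post_snr - 1.0, 0.0))
        g = prio_snr / (1.0 + prio_snr)           # Wiener gain
        gains[t] = g
        prev_clean = (g ** 2) * noisy_power[t]    # updated clean-power estimate
    return gains

# Toy usage with random spectra standing in for STFT frames.
rng = np.random.default_rng(0)
noisy = rng.random((5, 129)) + 0.1
noise = np.full(129, 0.1)
print(wiener_gain_decision_directed(noisy, noise).shape)  # (5, 129)
```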

A Personal Prescription Management System Employing Optical Character Recognition Technique (OCR 기반의 개인 처방전 관리 시스템)

  • Kim, Jae-wan;Kim, Sang-tae;Yoon, Jun-yong;Joo, Yang-Ick
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.19 no.10
    • /
    • pp.2423-2428
    • /
    • 2015
  • We have implemented a personal prescription management system that enables resource-limited mobile devices to utilize an optical character recognition (OCR) technique. The system automatically detects and recognizes the text in a personal prescription using OCR. We improved on the recognition rate of the original method by adding a pre-processing step before character recognition. Example services such as personal prescription management, an alarm service, and a drug information service on mobile devices have been demonstrated using our system.
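A minimal sketch of the kind of pre-processing-plus-OCR pass the abstract describes is shown below. The paper does not name its OCR engine, so OpenCV and Tesseract (via pytesseract) are assumptions here, and the threshold parameters are illustrative.

```python
# Minimal sketch: pre-process a prescription photo, then run OCR on it.
import cv2
import pytesseract

def recognize_prescription_text(image_path, lang="eng"):
    """Grayscale, denoise, and binarize the image before text recognition."""
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 3)                       # remove salt-and-pepper noise
    binary = cv2.adaptiveThreshold(                      # handle uneven lighting
        gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
        cv2.THRESH_BINARY, blockSize=31, C=10)
    return pytesseract.image_to_string(binary, lang=lang)

# Usage (the file name is hypothetical):
# print(recognize_prescription_text("prescription.jpg"))
```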

Adaptive Character Segmentation to Improve Text Recognition Accuracy on Mobile Phones (모바일 시스템에서 텍스트 인식 위한 적응적 문자 분할)

  • Kim, Jeong Sik;Yang, Hyung Jeong;Kim, Soo Hyung;Lee, Guee Sang;Do, Luu Ngoc;Kim, Sun Hee
    • Smart Media Journal
    • /
    • v.1 no.4
    • /
    • pp.59-71
    • /
    • 2012
  • Since mobile phones are used as common communication devices, their applications are increasingly important to human life. Using a smart-phone camera to collect information about the daily-life environment is a target of many applications such as text recognition, object recognition, and context awareness. Studies have been conducted to provide important information through the recognition of text that is artificially or naturally included in images and videos acquired from mobile phones. In this study, a character segmentation method that improves character-recognition accuracy in images obtained from mobile phone cameras is proposed. The proposed method first classifies the text in a given image into printed letters and handwritten letters, since the segmentation approaches for them differ. For printed letters, a rough segmentation process is conducted, and then the segmented regions are integrated, deleted, and re-segmented. Segmentation of the handwritten letters is performed after skews are corrected and the characters are classified by integrating them. The experimental results show that our method achieves successful performance for both printed and handwritten letters, with rates of 95.9% and 84.7%, respectively.
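The rough segmentation stage can be illustrated with a minimal sketch: cutting a binarized text line at columns with no foreground pixels (a vertical projection profile). This is only an illustration of the initial step; the paper's integrate/delete/re-segment refinement and handwritten-letter handling are not reproduced.

```python
# Minimal sketch: rough character segmentation by vertical projection profile.
import numpy as np

def segment_by_projection(binary_line, min_width=2):
    """Return (start, end) column ranges of candidate characters.

    binary_line: 2-D array with foreground pixels > 0 and background == 0.
    """
    profile = (binary_line > 0).sum(axis=0)        # foreground count per column
    in_char, start, boxes = False, 0, []
    for x, count in enumerate(profile):
        if count > 0 and not in_char:
            in_char, start = True, x               # a character region begins
        elif count == 0 and in_char:
            in_char = False
            if x - start >= min_width:             # drop specks narrower than min_width
                boxes.append((start, x))
    if in_char:
        boxes.append((start, len(profile)))
    return boxes

# Toy usage: three "characters" separated by blank columns.
line = np.zeros((10, 30), dtype=np.uint8)
line[:, 2:6] = line[:, 10:15] = line[:, 20:27] = 1
print(segment_by_projection(line))  # [(2, 6), (10, 15), (20, 27)]
```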


Real-Time Object Recognition for Children Education Applications based on Augmented Reality (증강현실 기반 아동 학습 어플리케이션을 위한 실시간 영상 인식)

  • Park, Kang-Kyu;Yi, Kang
    • Journal of Korea Multimedia Society
    • /
    • v.20 no.1
    • /
    • pp.17-31
    • /
    • 2017
  • The aim of this paper is to present an object recognition method for an augmented reality system that utilizes existing education instruments, which were designed without any consideration of image processing and recognition. The light reflection, sizes, shapes, and color ranges of the existing target education instruments are major hurdles for object recognition. In addition, the real-time performance requirements on embedded devices and the user-experience constraints for child users are quite challenging issues for our image processing and object recognition approach. In order to meet these requirements, we employed a method that cascades light-weight weak classifiers that are complementary to each other, yielding a composite, highly accurate object classifier with a practically reasonable precision ratio. We implemented the proposed method and tested its performance on video comprising more than 11,700 frames of an actual playing scenario. The experimental results showed a 0.54% miss ratio and a 1.35% false-hit ratio.
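The cascading idea can be sketched as below: each cheap test may reject a candidate region early, and only regions passing every stage are accepted. The two example stages (an area check and a color-range check) are assumptions for illustration; the paper's actual weak classifiers are not specified here.

```python
# Minimal sketch of a cascade of light-weight weak classifiers.
import numpy as np

def area_stage(region, min_pixels=400):
    """Weak test: the candidate region is large enough to be a target object."""
    return region.shape[0] * region.shape[1] >= min_pixels

def color_stage(region, lo=(0, 100, 100), hi=(40, 255, 255)):
    """Weak test: enough pixels fall inside an assumed HSV-like color range."""
    mask = np.all((region >= lo) & (region <= hi), axis=-1)
    return mask.mean() > 0.3

def cascade_classify(region, stages):
    """Accept only if every stage accepts; reject as soon as one fails."""
    for stage in stages:
        if not stage(region):
            return False
    return True

# Toy usage with a random candidate region (H x W x 3 "HSV" values).
rng = np.random.default_rng(1)
candidate = rng.integers(0, 256, size=(30, 30, 3))
print(cascade_classify(candidate, [area_stage, color_stage]))
```

Ordering the cheapest tests first is what keeps the cascade fast enough for embedded real-time use.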

Real time instruction classification system

  • Sang-Hoon Lee;Dong-Jin Kwon
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.16 no.3
    • /
    • pp.212-220
    • /
    • 2024
  • With the recent advancement of society, AI technology has made significant strides, especially in the fields of computer vision and voice recognition. This study introduces a system that leverages these technologies to recognize users through a camera and relay commands within a vehicle based on voice commands. The system uses the YOLO (You Only Look Once) machine learning algorithm, widely used for object and entity recognition, to identify specific users. For voice command recognition, a machine learning model based on spectrogram voice analysis is employed to identify specific commands. This design aims to enhance security and convenience by preventing unauthorized access to vehicles and IoT devices by anyone other than registered users. We convert camera input data into YOLO system inputs to determine whether a person is present. Additionally, the system collects voice data through a microphone embedded in the device or computer and converts it into spectrogram data to be used as input for the voice recognition machine learning system. The input camera image data and voice data undergo inference through pre-trained models, enabling the recognition of simple commands within a limited space based on the inference results. This study demonstrates the feasibility of constructing a device management system within a confined space that enhances security and user convenience through a simple real-time system model. Finally, our work aims to provide practical solutions in various application fields, such as smart homes and autonomous vehicles.
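The voice branch of such a pipeline can be sketched as follows: a short microphone clip is turned into a log-magnitude spectrogram, the kind of 2-D input a command-classification model can consume. This is an assumed pipeline, not the authors' code, and the frame and hop sizes are illustrative.

```python
# Minimal sketch: microphone samples -> log-magnitude spectrogram.
import numpy as np

def log_spectrogram(signal, frame_len=400, hop=160, eps=1e-10):
    """Return a (frames, frequency-bins) log-magnitude spectrogram."""
    window = np.hanning(frame_len)
    frames = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len] * window
        frames.append(np.log(np.abs(np.fft.rfft(frame)) + eps))
    return np.array(frames)

# Toy usage: one second of a fake 16 kHz microphone signal.
rng = np.random.default_rng(0)
clip = rng.standard_normal(16000)
print(log_spectrogram(clip).shape)  # roughly (98, 201)
```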

User Motion Recognition Healthcare System Using Smart-Band (스마트밴드를 이용한 사용자 모션인식 헬스 케어 시스템 구현)

  • Park, Jin-Tae;Hwang, Hyun-Seo;Yun, Jun-Soo;Park, Gyung-Soo;Moon, Il-Young
    • Journal of Advanced Navigation Technology
    • /
    • v.18 no.6
    • /
    • pp.619-624
    • /
    • 2014
  • With the development of smart phones, various smart devices have emerged, and wearable computing devices that can be attached to the human body have been in the spotlight. In this paper, we developed a watch-type wearable device that can detect the user's movement, and a system that connects the wearable device to smart TVs or smart phones so that users can save and manage their physical information on those devices. Existing health care wearable devices save information by connecting to smart phones, and smart TV health applications usually include motion detection using cameras. However, there is a limit to connecting smart phone systems to devices from other companies, and because some smart TVs do not have cameras, users who want to connect their devices to smart TVs can also be limited. With the proposed system, user information collected from the wearable device through the smart phone makes it possible to exercise and manage one's health anywhere, and this information can also be checked in the smart TV application. This system is expected to help future work measure user behavior more accurately with recognition technology and other devices.
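As a minimal sketch of the motion-detection side (not the paper's watch firmware), a wrist-worn band can turn raw 3-axis accelerometer samples into a step count by peak detection on the smoothed acceleration magnitude. The thresholds below are assumptions.

```python
# Minimal sketch: step counting from 3-axis accelerometer samples.
import numpy as np

def count_steps(accel_xyz, threshold=1.2, min_gap=10):
    """Count peaks in |acceleration| above `threshold` (in g), ignoring peaks
    closer than `min_gap` samples to the previous one."""
    magnitude = np.linalg.norm(accel_xyz, axis=1)
    smoothed = np.convolve(magnitude, np.ones(5) / 5, mode="same")  # moving average
    steps, last_peak = 0, -min_gap
    for i in range(1, len(smoothed) - 1):
        is_peak = smoothed[i] > smoothed[i - 1] and smoothed[i] >= smoothed[i + 1]
        if is_peak and smoothed[i] > threshold and i - last_peak >= min_gap:
            steps += 1
            last_peak = i
    return steps

# Toy usage: a synthetic walking-like signal at ~2 Hz on a 50 Hz sensor.
t = np.arange(0, 10, 0.02)
accel = np.stack([0.1 * np.sin(7 * t), 0.1 * np.cos(5 * t),
                  1.0 + 0.4 * np.sin(2 * np.pi * 2 * t)], axis=1)
print(count_steps(accel))  # roughly 20 steps over 10 seconds
```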

Two-way Interactive Algorithms Based on Speech and Motion Recognition with Generative AI Technology (생성형 AI 기술을 적용한 음성 및 모션 인식 기반 양방향 대화형 알고리즘)

  • Dae-Sung Jang;Jong-Chan Kim
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.19 no.2
    • /
    • pp.397-402
    • /
    • 2024
  • Speech recognition and motion recognition technologies are applied in various smart devices, but they consist of simple command recognition and are used only as simple functions. Beyond simple handling of recognition data, professional command execution capabilities are required, based on data learned in various fields. Research is being conducted on a system platform that provides optimal data to users using generative AI, for which competition is currently taking place around the world, and that can interact through voice recognition and motion recognition. The main technical processes designed for this study use voice and motion recognition functions, the application of AI technology, and two-way communication. In this paper, two-way communication between a device and a user is achieved through various input methods by voice recognition and motion recognition technology combined with AI technology.
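The abstract stays at the architecture level, so the following is only a schematic sketch of such a two-way loop: a recognized speech command or gesture label becomes a text prompt, a generative model (represented here by a placeholder function) produces a reply, and the reply is returned to the user. The input types and the `generate_reply` stub are invented for illustration; no real speech or LLM API is used.

```python
# Minimal sketch of a two-way interaction loop over speech and motion inputs.
from dataclasses import dataclass

@dataclass
class UserInput:
    kind: str     # "speech" or "motion"
    content: str  # recognized command text or gesture label

def generate_reply(prompt: str) -> str:
    """Placeholder for a generative-AI call; a real system would query a model here."""
    return f"[model response to: {prompt}]"

def interact(user_input: UserInput) -> str:
    """Turn a recognized input into a prompt and get a conversational reply."""
    if user_input.kind == "speech":
        prompt = user_input.content
    elif user_input.kind == "motion":
        prompt = f"The user performed the gesture '{user_input.content}'. Respond appropriately."
    else:
        return "Unsupported input type."
    return generate_reply(prompt)

# Toy usage for both input modalities.
print(interact(UserInput("speech", "turn on the living room light")))
print(interact(UserInput("motion", "wave")))
```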

A Context Recognition System for Various Food Intake using Mobile and Wearable Sensor Data (모바일 및 웨어러블 센서 데이터를 이용한 다양한 식사상황 인식 시스템)

  • Kim, Kee-Hoon;Cho, Sung-Bae
    • Journal of KIISE
    • /
    • v.43 no.5
    • /
    • pp.531-540
    • /
    • 2016
  • The development of various sensors attached to mobile and wearable devices has led to an increasing number of services based on recognition of the user's current context. In this study, we propose a probabilistic model for recognizing the user's food intake context, which can occur in a great variety of situations. The model uses low-level sensor data from mobile and wrist-wearable devices that are widely available in daily life. To cope with the innate complexity and fuzziness of high-level activities like food intake, the context model represents the relevant contexts systematically based on the four components of activity theory and the five W's, and a tree-structured Bayesian network recognizes the probabilistic state. To verify the proposed method, we collected 383 minutes of data from four people over a week and found that the proposed method outperforms conventional machine learning methods in accuracy (93.21%). We also conducted a scenario-based test and investigated the contribution of individual components to recognition.
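A toy version of the probabilistic-recognition step is sketched below: a tiny tree-structured network with "eating" as the root and two observed leaves (location and wrist motion), whose posterior is computed by multiplying the leaves' conditional probabilities. The structure and all numbers are invented for illustration, not the paper's learned network.

```python
# Minimal sketch: posterior of "eating" in a tiny tree-structured Bayesian network.
def posterior_eating(observed, prior=0.2):
    """observed: dict of leaf name -> value, e.g. {"location": "restaurant", ...}"""
    # (P(value | eating), P(value | not eating)) for each leaf; all values assumed.
    cpt = {
        "location": {"restaurant": (0.6, 0.05), "home": (0.3, 0.45), "other": (0.1, 0.5)},
        "wrist_motion": {"hand_to_mouth": (0.7, 0.1), "still": (0.1, 0.5), "other": (0.2, 0.4)},
    }
    p_yes, p_no = prior, 1.0 - prior
    for leaf, value in observed.items():
        like_yes, like_no = cpt[leaf][value]
        p_yes *= like_yes
        p_no *= like_no
    return p_yes / (p_yes + p_no)

# Toy usage: restaurant location plus hand-to-mouth motion strongly suggests eating.
print(posterior_eating({"location": "restaurant", "wrist_motion": "hand_to_mouth"}))  # ~0.95
```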

User Context Recognition Based on Indoor and Outdoor Location and Development of User Interface for Visualization (실내 및 실외 위치 기반 사용자 상황인식과 시각화를 위한 사용자 인터페이스 개발)

  • Noh, Hyun-Yong;Oh, Sae-Won;Lee, Jin-Hyung;Park, Chang-Hyun;Hwang, Keum-Sung;Cho, Sung-Bae
    • The HCI Society of Korea: Conference Proceedings
    • /
    • 2009.02a
    • /
    • pp.84-89
    • /
    • 2009
  • Personal mobile devices such as mobile phones, PMPs, and MP3 players have advanced incredibly. Such advances in mobile technology have ignited research on life-logs for understanding the daily life of a user. Since a life-log collected by mobile sensors can aid the user's memory, many studies have been conducted. This paper suggests a methodology for user-context recognition and visualization based on outdoor location from GPS as well as indoor location from wireless LAN. When the GPS sensor does not work well indoors, the wireless LAN plays the major role in recognizing the location of the user, so that recognition of the user context becomes more accurate. In this paper, we have also developed a method for visualizing the life-log through map and blog interfaces. In the experiments, subjects collected real data with mobile devices, and we evaluated the performance of the proposed visualization and context recognition method on those data.
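The indoor/outdoor switch can be illustrated with a minimal sketch: use the GPS fix when one is available, otherwise fall back to a nearest-neighbour match of the current Wi-Fi signal fingerprint against a small map of known indoor places. The fingerprint database and distance measure below are assumptions, not the paper's system.

```python
# Minimal sketch: GPS outdoors, Wi-Fi fingerprint matching indoors.
import math

# Access-point -> RSSI fingerprints for known indoor places (made-up values).
FINGERPRINTS = {
    "lab":       {"ap1": -40, "ap2": -70, "ap3": -80},
    "cafeteria": {"ap1": -75, "ap2": -45, "ap3": -60},
}

def locate(gps_fix, wifi_scan):
    """gps_fix: (lat, lon) or None; wifi_scan: dict of AP -> RSSI."""
    if gps_fix is not None:
        return ("outdoor", gps_fix)
    best, best_dist = None, math.inf
    for place, fp in FINGERPRINTS.items():
        aps = set(fp) | set(wifi_scan)
        dist = sum((fp.get(ap, -100) - wifi_scan.get(ap, -100)) ** 2 for ap in aps)
        if dist < best_dist:
            best, best_dist = place, dist
    return ("indoor", best)

# Toy usage: no GPS fix indoors, so the Wi-Fi fingerprint decides the place.
print(locate(None, {"ap1": -42, "ap2": -72}))   # ('indoor', 'lab')
print(locate((37.5665, 126.9780), {}))          # ('outdoor', (37.5665, 126.978))
```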
