• Title/Summary/Keyword: User recognition

Behavior recognition system based fog cloud computing

  • Lee, Seok-Woo; Lee, Jong-Yong; Jung, Kye-Dong
    • International journal of advanced smart convergence / v.6 no.3 / pp.29-37 / 2017
  • Current behavior recognition systems do not reconcile the differing data formats of the sensor data measured by a user's sensor modules or devices. Processing such large volumes of heterogeneous sensor data therefore requires data processing, sharing, and collaboration services between users and the behavior recognition system, as well as real-time interaction between them. To address this, we propose a fog-cloud-based behavior recognition system for processing human body sensor data. The system resolves the heterogeneity of the measured sensor data by standardizing data formats in a DBaaS (Database as a Service) cloud reached through a fog layer. In addition, placing the fog cloud between users and the central cloud increases the proximity between users and servers, enabling real-time interaction. On this basis, we propose a behavior recognition system that recognizes the user's behavior and delivers the results to observers in a collaborative environment. The proposed system relieves the server overload caused by large sensor data volumes and removes the real-time interaction barrier caused by the lack of proximity between users and servers, delivering a consistent behavior recognition service capable of real-time interaction. (A minimal sketch of the fog-layer format normalization idea follows this entry.)
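
The entry above describes a fog layer that converts heterogeneous sensor payloads into one standard format before they reach the DBaaS cloud. The following minimal sketch illustrates that normalization idea only; the device payload shapes, field names, and StandardRecord type are assumptions for illustration and do not come from the paper.

```python
from dataclasses import dataclass
from typing import Any, Dict

@dataclass
class StandardRecord:
    """Unified record format forwarded from the fog node to the cloud (assumed fields)."""
    user_id: str
    sensor: str
    value: float
    timestamp_ms: int

def normalize(payload: Dict[str, Any]) -> StandardRecord:
    """Map device-specific payloads onto the standard format at the fog node."""
    if payload.get("type") == "wearable_v1":        # hypothetical device format A
        return StandardRecord(payload["uid"], payload["sensor"],
                              float(payload["val"]), int(payload["ts"]))
    if payload.get("type") == "phone_imu":          # hypothetical device format B
        return StandardRecord(payload["user"], "accelerometer",
                              float(payload["acc_mag"]), int(payload["time"] * 1000))
    raise ValueError(f"unknown payload type: {payload.get('type')}")

# Example: two differently shaped payloads become one consistent record type.
records = [
    normalize({"type": "wearable_v1", "uid": "u1", "sensor": "heart_rate",
               "val": 72, "ts": 1700000000000}),
    normalize({"type": "phone_imu", "user": "u1", "acc_mag": 9.81,
               "time": 1700000000.5}),
]
for r in records:
    print(r)
```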

User Interface Design and Rehabilitation Training Methods in Hand or Arm Rehabilitation Support System (손과 팔 재활 훈련 지원 시스템에서의 사용자 인터페이스 설계와 재활 훈련 방법)

  • Ha, Jin-Young; Lee, Jun-Ho; Choi, Sun-Hwa
    • Journal of Industrial Technology / v.31 no.A / pp.63-69 / 2011
  • A home-based rehabilitation system for patients with impaired hands or arms was developed. Using this system, patients can save the time and cost of visiting a hospital, and the system's interface is easy to operate. In this paper, we discuss a rehabilitation system that uses video recognition, focusing on the design of a convenient user interface and on rehabilitation training methods. The system consists of two screens: one for recording the user's information and one for training. A first-time user enters his or her information, the system chooses a training method based on that information, and the training process is recorded automatically through video recognition. On the training screen, video clips of the training method and help messages are displayed for the user. (A small sketch of this selection-and-logging flow is given after this entry.)

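As referenced above, the system chooses a training method from the first-time user's information and logs the training session automatically. The sketch below is a minimal illustration of that flow; the PatientInfo fields, severity scale, and selection rules are hypothetical, not the paper's actual criteria.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List

@dataclass
class PatientInfo:
    name: str
    affected_limb: str          # "hand" or "arm" (assumed categories)
    severity: int               # 1 (mild) .. 3 (severe), hypothetical scale

@dataclass
class Session:
    method: str
    started: datetime
    repetitions: int = 0        # would be counted by the video recognition module

def choose_training_method(info: PatientInfo) -> str:
    """Pick a training method from the user's information (illustrative rules only)."""
    if info.affected_limb == "hand":
        return "finger_flexion" if info.severity >= 2 else "grip_release"
    return "shoulder_raise" if info.severity >= 2 else "elbow_extension"

history: List[Session] = []
patient = PatientInfo("example", "hand", 2)
session = Session(choose_training_method(patient), datetime.now())
history.append(session)         # recorded automatically alongside the video log
print(session.method)
```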

Hand Gesture Recognition using Multivariate Fuzzy Decision Tree and User Adaptation (다변량 퍼지 의사결정트리와 사용자 적응을 이용한 손동작 인식)

  • Jeon, Moon-Jin; Do, Jun-Hyeong; Lee, Sang-Wan; Park, Kwang-Hyun; Bien, Zeung-Nam
    • The Journal of Korea Robotics Society / v.3 no.2 / pp.81-90 / 2008
  • As demand for services for the disabled and the elderly increases, assistive technologies have developed rapidly. Natural human signals such as voice or gesture have been applied to systems that assist the disabled and the elderly. As an example of such a human-robot interface, the Soft Remote Control System has been developed by HWRS-ERC at KAIST [1]. This is a vision-based hand gesture recognition system for controlling home appliances such as a television, a lamp, and curtains. One of the most important technologies in the system is the hand gesture recognition algorithm. The problems that most frequently lower the recognition rate are inter-person variation and intra-person variation; intra-person variation can be handled by introducing fuzzy concepts. In this paper, we propose a multivariate fuzzy decision tree (MFDT) learning and classification algorithm for hand motion recognition. To recognize the hand gestures of a new user, the most suitable recognition model among several well-trained models is selected by a model selection algorithm and then incrementally adapted to the user's hand gestures. To show the general performance of MFDT as a classifier, we report classification rates on benchmark data from the UCI repository. For hand gesture recognition performance, we tested hand gesture data collected from 10 people over 15 days. The experimental results show that the classification and user adaptation performance of the proposed algorithm is better than that of a general fuzzy decision tree. (A minimal fuzzy-decision-node sketch follows this entry.)

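The paper's MFDT combines multiple variables per node; the sketch below only illustrates the simpler underlying idea of a fuzzy decision node, where a sample descends both branches with weights given by triangular membership functions and the leaf class distributions are blended. All fuzzy set shapes, leaf distributions, and the three-class setup are invented for illustration.

```python
import numpy as np

def tri_membership(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_node(x: float):
    """Degrees of membership in the 'low' and 'high' branches of one node."""
    low = tri_membership(x, -1.0, 0.0, 1.0)      # illustrative fuzzy set "low"
    high = tri_membership(x, 0.0, 1.0, 2.0)      # illustrative fuzzy set "high"
    s = low + high
    return (0.5, 0.5) if s == 0 else (low / s, high / s)

# Hypothetical leaf class distributions for a 3-class gesture problem.
leaf_low = np.array([0.7, 0.2, 0.1])
leaf_high = np.array([0.1, 0.3, 0.6])

def classify(x: float) -> int:
    w_low, w_high = fuzzy_node(x)
    posterior = w_low * leaf_low + w_high * leaf_high   # blend leaves by membership
    return int(np.argmax(posterior))

print(classify(0.2), classify(0.9))   # samples near the boundary use both leaves
```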

A Study on the Recognition of Diversity on Digital Media Services (디지털미디어서비스에 관한 시청자의 다양성 인식 실증)

  • Lee, Sang-Ho
    • Journal of the Korea Convergence Society / v.9 no.11 / pp.237-244 / 2018
  • This paper deals with viewers' recognition of diversity in digital media services. The researcher focused on IPTV users and on how the perceived diversity of a digital media service affects user satisfaction. The results are as follows. First, digital media service users recognized that supply-side diversity of media channels and VOD affects their recognition of accessibility (ease of use) and perceived playfulness. Second, users recognized that the variety of performers and actors affects recognition of accessibility and perceived playfulness. Moreover, recognition of accessibility, actors' diversity, and perceived playfulness were found to affect users' recognition of the media player's contribution to diversity and their media satisfaction. The researcher expects that this methodology for confirming users' recognition of diversity will contribute to preparing future media policy.

A Study of Machine Learning based Face Recognition for User Authentication

  • Hong, Chung-Pyo
    • Journal of the Semiconductor & Display Technology / v.19 no.2 / pp.96-99 / 2020
  • With the rapid development of smart devices, many related services are being devised, and almost every service is designed to provide user-centric functionality based on personal information. In this situation, preventing the unintentional leakage of personal information is essential. Conventionally, an ID and password system is used for user authentication. This is convenient, but it is vulnerable to problems caused by information leakage. To overcome these problems, many methods related to face recognition are being researched. In this paper, we investigate the trend of user authentication through biometrics and two representative models for face recognition: DeepFace from Facebook and FaceNet from Google. The models are based on the concepts of deep learning and distance metric learning, respectively, and both build on convolutional neural network (CNN) architectures. In the future, further research is needed on the equipment configuration required for practical applications and on ways to provide actual personalized services. (A minimal embedding-distance verification sketch follows this entry.)
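
Both models discussed above ultimately compare faces through learned representations; FaceNet in particular verifies a face pair by thresholding the distance between embeddings. The sketch below shows only that final comparison step, with a random linear map standing in for a trained CNN and an arbitrary threshold; it is not DeepFace or FaceNet.

```python
import numpy as np

rng = np.random.default_rng(0)
_W = rng.normal(size=(128, 1024))          # stand-in for a trained CNN; not DeepFace/FaceNet

def embed(face_pixels: np.ndarray) -> np.ndarray:
    """Map a flattened face image to an L2-normalized 128-D embedding (placeholder model)."""
    v = _W @ face_pixels.reshape(-1)
    return v / np.linalg.norm(v)

def same_person(face_a: np.ndarray, face_b: np.ndarray, threshold: float = 1.1) -> bool:
    """Verify a pair by thresholding the Euclidean distance between embeddings.
    The threshold here is arbitrary; in practice it is tuned on a validation set."""
    d = np.linalg.norm(embed(face_a) - embed(face_b))
    return d < threshold

a = rng.standard_normal(1024)              # fake pre-processed (zero-mean) face vectors
b = a + 0.01 * rng.standard_normal(1024)   # nearly identical face
c = rng.standard_normal(1024)              # unrelated face
print(same_person(a, b), same_person(a, c))
```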

Gaze Recognition Interface Development for Smart Wheelchair (지능형 휠체어를 위한 시선 인식 인터페이스 개발)

  • Park, S.H.
    • Journal of rehabilitation welfare engineering & assistive technology / v.5 no.1 / pp.103-110 / 2011
  • In this paper, we propose a gaze recognition interface for a smart wheelchair. The interface recognizes commands through gaze recognition and avoids detected obstacles while driving by sensing distances with range sensors. The smart wheelchair is composed of a gaze recognition and tracking module, a user interface module, an obstacle detector, a motor control module, and a range sensor module. The interface uses a camera with a built-in infrared filter and two LED light sources to determine which direction the pupils are turned, and it sends command codes to control the system, so no per-user calibration step is needed. The experimental results showed that the proposed interface can control the system accurately by recognizing the user's gaze direction. (A minimal gaze-to-command sketch with an obstacle override follows this entry.)
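
A minimal sketch of the gaze-to-command mapping with a range-sensor override, as referenced above. The gaze labels, command codes, and 50 cm clearance threshold are assumptions for illustration, not values from the paper.

```python
from enum import Enum
from typing import List

class Command(Enum):
    FORWARD = "forward"
    LEFT = "left"
    RIGHT = "right"
    STOP = "stop"

# Hypothetical mapping from recognized gaze direction to a wheelchair command code.
GAZE_TO_COMMAND = {
    "up": Command.FORWARD,
    "left": Command.LEFT,
    "right": Command.RIGHT,
    "down": Command.STOP,
}

def decide(gaze_direction: str, range_readings_cm: List[float],
           min_clearance_cm: float = 50.0) -> Command:
    """Turn a gaze direction into a motor command, letting the range sensors override
    a forward command when an obstacle is closer than the clearance threshold."""
    command = GAZE_TO_COMMAND.get(gaze_direction, Command.STOP)
    if command is Command.FORWARD and min(range_readings_cm) < min_clearance_cm:
        return Command.STOP                      # obstacle detected on the driving path
    return command

print(decide("up", [120.0, 95.0, 200.0]))        # clear path -> Command.FORWARD
print(decide("up", [120.0, 30.0, 200.0]))        # obstacle  -> Command.STOP
```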

Emotion Recognition and Expression System of Robot Based on 2D Facial Image (2D 얼굴 영상을 이용한 로봇의 감정인식 및 표현시스템)

  • Lee, Dong-Hoon; Sim, Kwee-Bo
    • Journal of Institute of Control, Robotics and Systems / v.13 no.4 / pp.371-376 / 2007
  • This paper presents an emotion recognition and expression system for an intelligent robot such as a home or service robot. Emotion recognition in the robot uses facial images, drawing on the motion and position of many facial features. A tracking algorithm recognizes a moving user from the mobile robot, and a facial region detection algorithm removes the skin color of the hands and the background outside the facial region from the captured user image. After normalization, which enlarges or reduces the image according to the distance to the detected facial region and rotates it according to the angle of the face, the robot obtains a facial image of fixed size. A multi-feature selection algorithm is then implemented so that the robot can recognize the user's emotion. In this paper, a multilayer perceptron (MLP) artificial neural network (ANN) is used as the pattern recognizer, with the back-propagation (BP) algorithm as the learning method. The emotion recognized by the robot is expressed on a graphic LCD: two coordinates are varied according to the emotion output of the ANN, and the parameters of the facial elements (eyes, eyebrows, mouth) of an avatar on the LCD change with those coordinates, allowing the system to express complex human emotions. (A minimal MLP forward-pass sketch follows this entry.)
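
A minimal forward-pass sketch of an MLP emotion classifier, as referenced above. The weights here are random rather than trained with back-propagation, and the eight-feature input layout and four-emotion label set are assumptions, not the paper's configuration.

```python
import numpy as np

EMOTIONS = ["neutral", "happy", "sad", "angry"]   # assumed label set, not the paper's

rng = np.random.default_rng(1)
# One-hidden-layer MLP; in the paper the weights would be learned with back-propagation.
W1, b1 = rng.normal(size=(16, 8)), np.zeros(16)   # 8 facial-feature inputs -> 16 hidden units
W2, b2 = rng.normal(size=(len(EMOTIONS), 16)), np.zeros(len(EMOTIONS))

def forward(features: np.ndarray) -> np.ndarray:
    """Forward pass: facial feature vector -> class probabilities."""
    h = np.tanh(W1 @ features + b1)
    logits = W2 @ h + b2
    e = np.exp(logits - logits.max())             # softmax over emotion classes
    return e / e.sum()

# Hypothetical normalized feature vector (eye/eyebrow/mouth positions and motions).
features = rng.normal(size=8)
probs = forward(features)
print(EMOTIONS[int(np.argmax(probs))], probs.round(3))
```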

MPEG-U based Advanced User Interaction Interface System Using Hand Posture Recognition (손 자세 인식을 이용한 MPEG-U 기반 향상된 사용자 상호작용 인터페이스 시스템)

  • Han, Gukhee; Lee, Injae; Choi, Haechul
    • Journal of Broadcast Engineering / v.19 no.1 / pp.83-95 / 2014
  • Hand posture recognition is an important technique for enabling a natural and familiar interface in the HCI (human-computer interaction) field. In this paper, we introduce a hand posture recognition method using a depth camera and incorporate it into an MPEG-U based advanced user interaction (AUI) interface system, which can provide a natural interface to a variety of devices. The proposed method first detects the positions and lengths of all extended fingers and then recognizes the hand posture from the pose of one or two hands and the number of folded fingers when the user makes a gesture corresponding to a pattern of the AUI data format specified in MPEG-U Part 2. The AUI interface system represents the user's hand posture as a compliant MPEG-U schema structure. Experimental results show the performance of the hand posture recognition, and the AUI interface system is verified to be compatible with the MPEG-U standard. (A simplified posture-to-pattern mapping sketch follows this entry.)
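
A simplified sketch of mapping a recognized posture to a serialized pattern, as referenced above. The posture labels, element name, and attribute are illustrative only and do not follow the actual MPEG-U Part 2 schema or the paper's depth-based finger detection.

```python
import xml.etree.ElementTree as ET

def recognize_posture(num_hands: int, folded_fingers: int) -> str:
    """Very simplified stand-in for the recognizer: map hand count and folded-finger
    count to a posture label (the real method uses depth-based finger detection)."""
    if num_hands == 1 and folded_fingers == 5:
        return "fist"
    if num_hands == 1 and folded_fingers == 0:
        return "open-hand"
    if num_hands == 2 and folded_fingers == 0:
        return "both-hands-open"
    return "unknown"

def to_aui_element(posture: str) -> str:
    """Serialize the result as a small XML element. The element and attribute names
    are illustrative only, not the actual MPEG-U Part 2 schema."""
    elem = ET.Element("HandPosture", attrib={"pattern": posture})
    return ET.tostring(elem, encoding="unicode")

print(to_aui_element(recognize_posture(num_hands=1, folded_fingers=5)))
```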

A Usability Evaluation Method for Speech Recognition Interfaces (음성인식용 인터페이스의 사용편의성 평가 방법론)

  • Han, Seong-Ho; Kim, Beom-Su
    • Journal of the Ergonomics Society of Korea / v.18 no.3 / pp.105-125 / 1999
  • As speech is a human being's most natural communication medium, using it offers many advantages. Currently, most computer user interfaces rely on a mouse and keyboard, but interfaces using speech recognition are expected to replace them, or at least to serve as a supporting tool. Despite its advantages, the speech recognition interface is not yet popular because of technical difficulties such as limited recognition accuracy and slow response time. Nevertheless, it is important to optimize human-computer system performance by improving usability. This paper presents a set of guidelines for designing speech recognition interfaces and provides a method for evaluating their usability. A total of 113 guidelines are suggested to improve the usability of speech recognition interfaces. The evaluation method consists of four major procedures: user interface evaluation, function evaluation, vocabulary estimation, and recognition speed/accuracy evaluation. Each procedure is described along with appropriate techniques for efficient evaluation. (A minimal speed/accuracy evaluation sketch follows this entry.)

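A minimal sketch of the recognition speed/accuracy procedure, as referenced above: command-level accuracy and mean response time computed from logged trials. The Trial fields and the example data are hypothetical.

```python
from dataclasses import dataclass
from statistics import mean
from typing import List

@dataclass
class Trial:
    spoken: str          # what the participant actually said
    recognized: str      # what the recognizer returned
    response_s: float    # time from end of utterance to system response

def speed_accuracy_report(trials: List[Trial]) -> dict:
    """Summarize the recognition speed/accuracy procedure: command-level accuracy
    and mean response time over a set of logged trials."""
    correct = sum(t.spoken == t.recognized for t in trials)
    return {
        "accuracy": correct / len(trials),
        "mean_response_s": mean(t.response_s for t in trials),
    }

trials = [                                  # hypothetical logged trials
    Trial("open file", "open file", 0.8),
    Trial("save file", "save file", 0.7),
    Trial("close window", "close widow", 1.2),
]
print(speed_accuracy_report(trials))        # accuracy 2/3, mean response 0.9 s
```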

Implementation of Pen-Gesture Recognition System for Multimodal User Interface (멀티모달 사용자 인터페이스를 위한 펜 제스처인식기의 구현)

  • 오준택; 이우범; 김욱현
    • Proceedings of the IEEK Conference / 2000.11c / pp.121-124 / 2000
  • In this paper, we propose a pen gesture recognition system as a user interface for multimedia terminals, which requires fast processing and a high recognition rate. It is a real-time, interactive system spanning a graphic module and a text module. Text editing in the recognition system is performed either by pen gestures in the graphic module or by direct editing in the text module, and it supports all 14 editing functions. Pen gesture recognition is performed by matching classification features extracted from the input strokes against a pen gesture model. The pen gesture model is constructed from classification features, i.e., the cross number, direction change, direction code number, position relation, and distance ratio, defined over 15 gesture types. The proposed recognition system achieved a 98% recognition rate and a 30 ms average processing time in a recognition experiment. (A minimal direction-code feature extraction sketch follows this entry.)

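A minimal sketch of extracting direction-code features from a pen stroke, as referenced above. The eight-direction quantization and the example stroke are assumptions for illustration; the paper's full feature set (cross number, position relation, distance ratio) is not reproduced.

```python
import math
from typing import List, Tuple

def direction_codes(points: List[Tuple[float, float]], num_dirs: int = 8) -> List[int]:
    """Quantize each stroke segment into one of 8 direction codes (0 = east,
    increasing counter-clockwise), in the style of chain-code stroke features."""
    codes = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        angle = math.atan2(y1 - y0, x1 - x0) % (2 * math.pi)
        codes.append(int(round(angle / (2 * math.pi / num_dirs))) % num_dirs)
    return codes

def direction_changes(codes: List[int]) -> int:
    """Count how many times the quantized direction changes along the stroke."""
    return sum(1 for a, b in zip(codes, codes[1:]) if a != b)

# Hypothetical stroke: rightward, then upward in conventional x-y coordinates.
stroke = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]
codes = direction_codes(stroke)
print(codes, direction_changes(codes))      # [0, 0, 2, 2] 1
```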