• Title/Summary/Keyword: Activity recognition


A Comparative Study of Voice Activity Detection Algorithms in Adverse Environments (잡음 환경에서의 음성 검출 알고리즘 비교 연구)

  • Yang Kyong-Chul;Yook Dong-Suk
    • Proceedings of the KSPS conference / 2006.05a / pp.45-48 / 2006
  • As speech recognition systems are used in many emerging applications, robust performance under extremely noisy conditions becomes more important. Voice activity detection (VAD) is regarded as one of the important factors for robust speech recognition. In this paper, we investigate conventional VAD algorithms and analyze the strengths and weaknesses of each algorithm.

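The entry above surveys conventional VAD algorithms; as a point of reference, here is a minimal sketch of the classic short-time-energy approach in Python. The frame length, hop size, threshold, and example signal are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def energy_vad(signal, frame_len=400, hop=160, threshold_db=-35.0):
    """Label each frame as speech (1) or non-speech (0) by short-time log energy.

    A classic energy-threshold VAD; threshold_db is relative to the maximum
    frame energy and is an illustrative default, not a value from the paper.
    """
    frames = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frames.append(signal[start:start + frame_len])
    frames = np.asarray(frames, dtype=float)

    energy = np.sum(frames ** 2, axis=1) + 1e-12        # avoid log(0)
    energy_db = 10.0 * np.log10(energy / energy.max())  # normalize to 0 dB peak
    return (energy_db > threshold_db).astype(int)       # 1 = speech, 0 = silence

# Example: 1 s of low-level noise with a louder tonal burst in the middle (16 kHz)
rng = np.random.default_rng(0)
x = 0.01 * rng.standard_normal(16000)
x[6000:10000] += 0.5 * np.sin(2 * np.pi * 300 * np.arange(4000) / 16000)
print(energy_vad(x))
```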

Research on different expectations on recognition of innovative activities (혁신활동별 성과 인식의 기대 차이에 관한 연구 -지속가능경영활동을 중심으로-)

  • Kim, Kwang-Soo;Choi, Sang-Hak
    • Journal of Korean Society for Quality Management / v.39 no.1 / pp.109-119 / 2011
  • This study divided innovation activities into three categories: sustainability management, management innovation, and quality innovation. Perceived performance for these three categories was further examined along four dimensions (effect on sales revenue, productivity improvement, corporate image enhancement, and cost reduction). We also examined differences in perceived performance between managers and other workers.

Field Test of Automated Activity Classification Using Acceleration Signals from a Wristband

  • Gong, Yue;Seo, JoonOh
    • International conference on construction engineering and project management / 2020.12a / pp.443-452 / 2020
  • Workers' awkward postures and unreasonable physical loads can be corrected by monitoring construction activities, thereby increasing the safety and productivity of construction workers and projects. However, manual identification is time-consuming and subject to high human variance. In this regard, an automated activity recognition system based on inertial measurement units can help collect motion data rapidly and precisely. With the acceleration data, machine learning algorithms are used to train classifiers that automatically categorize activities. However, in previous studies the input acceleration data were extracted either from designed experiments or from simple construction work, so the collected data series are discontinuous and the activity categories are insufficient for real construction circumstances. This study aims to collect acceleration data during long-term continuous work in a construction project and to validate the feasibility of an activity recognition algorithm on the continuous motion data. The data collection covers two workers performing formwork at the same site. An accelerometer, as well as a portable camera, is attached to each worker during the entire working session to simultaneously record motion data and the corresponding activity. Supervised machine learning models are trained to classify activities at hierarchical levels, reaching 96.9% testing accuracy in recognizing rest versus work and 85.6% testing accuracy in identifying stationary, traveling, and rebar installation actions.

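The entry above trains hierarchical classifiers on wristband acceleration data (rest vs. work, then stationary/traveling/rebar installation). Below is a minimal sketch of such a two-level pipeline using windowed statistical features and random forests; the window length, feature set, and classifier choice are assumptions for illustration and are not taken from the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(acc, win=100):
    """Summarize tri-axial acceleration (N x 3) into per-window statistics."""
    feats = []
    for s in range(0, len(acc) - win + 1, win):
        w = acc[s:s + win]
        # mean, standard deviation, and mean absolute jerk per axis
        feats.append(np.concatenate([w.mean(0), w.std(0),
                                     np.abs(np.diff(w, axis=0)).mean(0)]))
    return np.asarray(feats)

def train_hierarchy(X, y_restwork, y_action):
    """Two-level classification: rest/work first, then action type on work windows.

    X: windowed features; y_restwork, y_action: numpy arrays of string labels
    (hypothetical pre-labeled arrays standing in for the wristband recordings).
    """
    level1 = RandomForestClassifier(n_estimators=100).fit(X, y_restwork)
    work_mask = (y_restwork == "work")
    level2 = RandomForestClassifier(n_estimators=100).fit(X[work_mask], y_action[work_mask])
    return level1, level2

def predict_hierarchy(level1, level2, X):
    coarse = level1.predict(X)                 # rest vs. work
    fine = np.array(coarse, dtype=object)
    work_mask = (coarse == "work")
    if work_mask.any():
        fine[work_mask] = level2.predict(X[work_mask])  # stationary / traveling / rebar
    return coarse, fine
```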

Human Activity Pattern Recognition Using Motion Information and Joints of Human Body (인체의 조인트와 움직임 정보를 이용한 인간의 행동패턴 인식)

  • Kwak, Nae-Joung;Song, Teuk-Seob
    • Journal of the Korea Institute of Information and Communication Engineering / v.16 no.6 / pp.1179-1186 / 2012
  • In this paper, we propose an algorithm that recognizes human activity patterns using the joints of the human body and their motion information. The proposed method extracts the object from the input video, automatically locates joints using the proportions of the human body, applies a block-matching algorithm to each joint, and obtains the motion information of the joints. The method uses the moving joints, the directional vectors of the joint motions, and the signs representing increases or decreases in the x and y coordinates of the joints as basic parameters for activity recognition. The method was tested on eight human activities in video input from a web camera and achieved good recognition rates.
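
The entry above estimates joint motion with block matching and uses directional vectors and coordinate signs as features. Below is a minimal sketch of exhaustive block matching (sum of absolute differences) for a single joint; the block size, search range, and the assumption that the joint lies away from the frame border are illustrative choices, not details from the paper.

```python
import numpy as np

def block_match(prev, curr, joint, block=8, search=6):
    """Estimate a joint's motion vector between two frames by exhaustive SAD search.

    prev, curr: consecutive grayscale frames (2D arrays); joint: (row, col).
    Returns the motion vector (dy, dx) and the sign features (sign of dx, sign of dy).
    """
    r, c = joint
    ref = prev[r - block // 2:r + block // 2, c - block // 2:c + block // 2].astype(float)
    best, best_dy, best_dx = np.inf, 0, 0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = curr[r + dy - block // 2:r + dy + block // 2,
                        c + dx - block // 2:c + dx + block // 2].astype(float)
            if cand.shape != ref.shape:   # skip candidates that fall off the frame
                continue
            sad = np.abs(ref - cand).sum()
            if sad < best:
                best, best_dy, best_dx = sad, dy, dx
    return (best_dy, best_dx), (np.sign(best_dx), np.sign(best_dy))
```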

A Study on Visual Perception based Emotion Recognition using Body-Activity Posture (사용자 행동 자세를 이용한 시각계 기반의 감정 인식 연구)

  • Kim, Jin-Ok
    • The KIPS Transactions:PartB / v.18B no.5 / pp.305-314 / 2011
  • Research into the visual perception of human emotion for intention recognition has traditionally focused on facial expressions. Recently, researchers have turned to the more challenging field of emotional expression through body posture or activity. The proposed work approaches the recognition of basic emotional categories from body postures using a neural model inspired by the neurophysiology of visual perception. In keeping with information-processing models of the visual cortex, this work constructs a biologically plausible hierarchy of neural detectors that can discriminate six basic emotional states from static views of the associated body postures. The proposed model, which is tolerant of parameter variations, demonstrates its potential by being evaluated against human test subjects on a set of body postures.
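
The entry above describes a biologically plausible hierarchy of neural detectors mapping static body postures to six basic emotional states. The plain feed-forward stack below is only a crude stand-in to show the shape of the task (posture descriptor in, emotion category out); the feature size, label set, and layer sizes are assumptions, not the paper's model.

```python
import numpy as np
from tensorflow.keras import layers, models

N_FEATURES = 30   # assumed size of a posture descriptor (e.g., joint angles/positions)
EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]  # assumed label set

model = models.Sequential([
    layers.Input(shape=(N_FEATURES,)),
    layers.Dense(64, activation="relu"),                # low-level posture feature detectors
    layers.Dense(32, activation="relu"),                # mid-level configuration detectors
    layers.Dense(len(EMOTIONS), activation="softmax"),  # emotion category read-out
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Hypothetical training call on pre-extracted posture descriptors X and integer labels y:
# model.fit(X, y, epochs=30, validation_split=0.2)
```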

Detecting Complex 3D Human Motions with Body Model Low-Rank Representation for Real-Time Smart Activity Monitoring System

  • Jalal, Ahmad;Kamal, Shaharyar;Kim, Dong-Seong
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.3 / pp.1189-1204 / 2018
  • Detecting and capturing 3D human structures from intensity-based image sequences is an inherently challenging problem that has attracted the attention of several researchers, especially in real-time activity recognition (Real-AR). These Real-AR systems have been significantly enhanced by depth sensors, which provide richer information than the RGB video sensors used in conventional Real-AR systems. This study proposes a depth-based routine-logging Real-AR system to identify daily human activity routines and to turn the surroundings into an intelligent living space. Our real-time routine-logging Real-AR system comprises data collection with a depth camera, feature extraction based on joint information, and training/recognition of each activity. In addition, the recognition mechanism locates and pinpoints the learned activities and produces routine logs. Evaluation on depth datasets (a self-annotated dataset and MSRAction3D) demonstrates that the proposed system achieves better recognition rates and is more robust than state-of-the-art methods. Our Real-AR system can be feasibly deployed for long-term use in behavior monitoring applications, humanoid-robot systems, and e-medical therapy systems.
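
The entry above extracts joint-based features from depth data and then trains/recognizes each activity. Below is a minimal sketch of one common joint-based feature (time-averaged pairwise joint distances) feeding an off-the-shelf classifier; it does not reproduce the paper's low-rank body-model representation, and the dataset handling is assumed.

```python
import numpy as np
from sklearn.svm import SVC

def joint_features(skeleton_seq):
    """Turn a sequence of skeleton frames (T x J x 3) into one feature vector.

    Uses per-frame pairwise joint distances averaged over time; a simple
    stand-in for joint-based features, not the paper's actual representation.
    """
    T, J, _ = skeleton_seq.shape
    dists = []
    for frame in skeleton_seq:
        d = np.linalg.norm(frame[:, None, :] - frame[None, :, :], axis=-1)
        dists.append(d[np.triu_indices(J, k=1)])   # upper triangle, no diagonal
    return np.mean(dists, axis=0)

# Hypothetical usage on a list of labeled skeleton sequences:
# X = np.stack([joint_features(seq) for seq in sequences])
# clf = SVC(kernel="rbf").fit(X, labels)
```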

Human Activity Recognition with LSTM Using the Egocentric Coordinate System Key Points

  • Wesonga, Sheilla;Park, Jang-Sik
    • Journal of the Korean Society of Industry Convergence / v.24 no.6_1 / pp.693-698 / 2021
  • As technology advances, there is an increasing need for research in the fields where it is applied. One of the most researched topics in computer vision is human activity recognition (HAR), which has been widely implemented in various fields including healthcare, video surveillance, and education. We therefore present in this paper a human activity recognition system that is invariant to scale and rotation, employing Kinect depth sensors to obtain the human skeleton joints. In contrast to previous approaches that use inter-joint angles, we use the angles between each limb and the X, Y, and Z axes as feature vectors. These limb-axis angles make our system scale invariant. We further calculate the body's relative direction in egocentric coordinates to provide rotation invariance. For the system parameters, we use 8 limbs, each with its angles to the X, Y, and Z axes of the coordinate system, as feature vectors. The extracted features are trained and tested with a long short-term memory (LSTM) network, which gives an average accuracy of 98.3%.
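
The entry above uses, for each of 8 limbs, the angles between the limb and the X, Y, Z axes as features and classifies sequences with an LSTM. Below is a minimal sketch of that feature computation and a small Keras LSTM; the limb pairs, sequence length, and network sizes are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from tensorflow.keras import layers, models

AXES = np.eye(3)  # unit vectors for the X, Y, Z axes

def limb_axis_angles(joints, limb_pairs):
    """Angles between each limb vector and the X, Y, Z axes for one frame.

    joints: (J, 3) 3D joint positions; limb_pairs: list of (parent, child) indices.
    With 8 limbs this yields 8 x 3 = 24 features per frame, matching the setup
    described in the abstract; the specific limb pairs are an assumption.
    """
    feats = []
    for a, b in limb_pairs:
        v = joints[b] - joints[a]
        v = v / (np.linalg.norm(v) + 1e-8)
        feats.extend(np.arccos(np.clip(AXES @ v, -1.0, 1.0)))
    return np.asarray(feats)

SEQ_LEN, N_FEATURES, N_CLASSES = 60, 24, 6   # illustrative sizes
model = models.Sequential([
    layers.Input(shape=(SEQ_LEN, N_FEATURES)),
    layers.LSTM(64),                                 # sequence model over per-frame angles
    layers.Dense(N_CLASSES, activation="softmax"),   # activity classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(X_train, y_train, epochs=50, validation_data=(X_val, y_val))
```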

Kinect Sensor- based LMA Motion Recognition Model Development

  • Hong, Sung Hee
    • International Journal of Advanced Culture Technology / v.9 no.3 / pp.367-372 / 2021
  • The purpose of this study is to show that movement expression activities are effective for intellectually disabled people in the learning process of Kinect sensor-based LMA motion recognition. We implemented ICT motion recognition games for the intellectually disabled based on LMA movement learning. The characteristics of movement in Laban's LMA include the timing of movement produced by a body that perceives space, and the tension or relaxation of emotional expression. The design and implementation of the motion recognition model are described, and the feasibility of the proposed model is verified through a simple experiment. In the experiment, 24 movement expression activities conducted over 10 learning sessions with 5 participants showed an average concordance rate of 53.4% or higher. Learning games that respond to changes in motion had a positive effect on learning emotions.

Detecting User Activities with the Accelerometer on Android Smartphones

  • Wang, Xingfeng;Kim, Heecheol
    • Journal of Multimedia Information System / v.2 no.2 / pp.233-240 / 2015
  • Mobile devices are becoming increasingly sophisticated, and the latest generation of smartphones incorporates many diverse and powerful sensors, including acceleration, magnetic field, light, proximity, gyroscope, pressure, rotation vector, gravity, and orientation sensors. The availability of these sensors in mass-marketed communication devices creates exciting new opportunities for data mining and its applications. In this paper, we describe and evaluate a system that uses phone-based accelerometers to perform activity recognition, a task which involves identifying the physical activity that a user is performing. To implement our system, we collected labeled accelerometer data from 10 users as they performed daily activities such as "phone detached", "idle", "walking", "running", and "jumping", and then aggregated this time-series data into examples that summarize the user's activity over 5-minute intervals. We then used the resulting training data to induce a predictive model for activity recognition. This work is significant because the activity recognition model permits us to gain useful knowledge about the habits of millions of users, just by having them carry cell phones in their pockets.
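
The entry above aggregates labeled accelerometer time series into examples summarizing 5-minute intervals and then induces a predictive model. Below is a minimal sketch of that windowing and a simple classifier in Python; the sampling rate, feature set, and decision-tree model are assumptions for illustration, not details from the paper.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

SAMPLE_RATE_HZ = 20                       # assumed accelerometer sampling rate
WINDOW = 5 * 60 * SAMPLE_RATE_HZ          # 5-minute interval, as in the abstract

def interval_features(acc):
    """Aggregate one 5-minute window of tri-axial samples (N x 3) into a feature row."""
    mag = np.linalg.norm(acc, axis=1)
    return np.concatenate([acc.mean(0), acc.std(0), [mag.mean(), mag.std(), mag.max()]])

def build_examples(samples, labels):
    """samples: (N, 3) raw readings; labels: per-sample activity strings (list)."""
    X, y = [], []
    for s in range(0, len(samples) - WINDOW + 1, WINDOW):
        window_labels = list(labels[s:s + WINDOW])
        X.append(interval_features(samples[s:s + WINDOW]))
        y.append(max(set(window_labels), key=window_labels.count))  # majority label
    return np.asarray(X), np.asarray(y)

# Hypothetical usage:
# X, y = build_examples(raw_acc, raw_labels)
# clf = DecisionTreeClassifier(max_depth=8).fit(X, y)
```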

The Relations among ADL, Self-efficacy, Physical Activity and Cognitive Function in Korean Elders (노인의 일상생활 수행능력, 자기 효능감, 신체활동 및 인지기능의 관계)

  • Wang, Myoung-Ja
    • Research in Community and Public Health Nursing / v.21 no.1 / pp.101-109 / 2010
  • Purpose: This study aimed to identify the relations among ADL, self-efficacy, physical activity, and cognitive function in elders. Methods: A total of 257 subjects aged between 60 and 92 were selected through convenience sampling. Data were collected with a self-reported questionnaire from November 1 to November 30, 2008, and analyzed with SPSS/WIN 15.0. Results: Differences in ADL, self-efficacy, physical activity, and cognitive function according to general characteristics were as follows. ADL differed significantly according to age, cohabitation, perceived health, and successful aging. Self-efficacy differed significantly according to cohabitation, perceived health, and successful aging. Physical activity differed significantly according to age, educational level, cohabitation, and perceived health. Cognitive function differed significantly according to age, educational level, job, and perceived health. The correlation coefficients (r) of ADL were .565 with self-efficacy, .633 with physical activity, and .460 with cognitive function. Conclusion: The findings of this study may be useful in understanding the health status of community-dwelling elders and in developing more specific health promotion programs.