• Title/Summary/Keyword: human activity recognition system


Posture and activity monitoring using a 3-axis accelerometer (3축 가속도 센서를 이용한 자세 및 활동 모니터링)

  • Jeong, Do-Un;Chung, Wan-Young
    • Journal of Sensor Science and Technology / v.16 no.6 / pp.467-474 / 2007
  • Real-time monitoring of human activity provides useful information about activity quantity and capability. This study implemented a small, low-power acceleration monitoring system for convenient monitoring of activity quantity and for recognizing emergency situations such as falls during daily life. For wireless transmission of the acceleration sensor signal, we developed a transmission system based on a wireless sensor network, along with a program for storing and monitoring the wirelessly transmitted signals on a PC in real time. The performance of the implemented system was evaluated by assessing its output characteristics according to changes in posture, and parameters and a context recognition algorithm were developed to monitor activity volume during daily life and to recognize emergency situations such as falls. In particular, recognition errors under sudden changes in acceleration were minimized by applying a fall correction algorithm.
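The abstract does not give the detection parameters, so the following is only a minimal, hedged sketch of a common baseline for accelerometer-based fall detection: threshold the signal vector magnitude of the 3-axis signal and check for post-impact stillness. The impact threshold, rest band, and window length are assumptions, not values from the paper.

```python
import numpy as np

def detect_fall(accel_xyz, impact_g=2.5, rest_g=1.0, window=10):
    """Flag a possible fall when the signal vector magnitude spikes above
    impact_g and then settles near rest_g (the person lying still).
    accel_xyz: (N, 3) array of acceleration samples in units of g."""
    svm = np.linalg.norm(accel_xyz, axis=1)        # per-sample magnitude
    for i in np.where(svm > impact_g)[0]:          # candidate impacts
        after = svm[i + 1:i + 1 + window]
        # post-impact stillness: magnitude stays close to gravity alone
        if len(after) == window and np.all(np.abs(after - rest_g) < 0.2):
            return True, int(i)
    return False, None
```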

Logical Activity Recognition Model for Smart Home Environment

  • Choi, Jung-In;Lim, Sung-Ju;Yong, Hwan-Seung
    • Journal of the Korea Society of Computer and Information / v.20 no.9 / pp.67-72 / 2015
  • With the expansion of the IoT (Internet of Things), studies on interaction between humans and things through motion recognition are increasing. This paper proposes a system that recognizes the user's logical activity in a home environment by attaching sensors to various objects. We employ Arduino sensors and infer the logical activity using the physical activity model developed in our previous research. The system can recognize activities such as watching TV, listening to music, talking, eating, cooking, sleeping, and using a computer. After producing experimental data from a virtual scenario, the average recognition rate was 95%, although the result can vary depending on the sensor setup and physical activity recognition errors. The recognized results are presented to the user through various visualizations.
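As a hedged illustration of how object-attached sensors could map to logical activities, the rule table below uses hypothetical sensor names and is not the paper's actual activity model.

```python
# Illustrative rules only: each logical activity fires when all of its
# required object sensors are active.
RULES = [
    ({"tv_power", "sofa_pressure"}, "watching TV"),
    ({"speaker_power"}, "listening to music"),
    ({"stove_heat", "kitchen_motion"}, "cooking"),
    ({"bed_pressure", "lights_off"}, "sleeping"),
    ({"keyboard_touch", "monitor_power"}, "using computer"),
]

def infer_logical_activity(active_sensors):
    for required, activity in RULES:
        if required <= active_sensors:   # all required sensors are active
            return activity
    return "unknown"

print(infer_logical_activity({"tv_power", "sofa_pressure"}))  # watching TV
```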

Development of a Hybrid Deep-Learning Model for the Human Activity Recognition based on the Wristband Accelerometer Signals

  • Jeong, Seungmin;Oh, Dongik
    • Journal of Internet Computing and Services / v.22 no.3 / pp.9-16 / 2021
  • This study aims to develop a human activity recognition (HAR) system as a deep-learning (DL) classification model that distinguishes various human activities. For the user's convenience, we rely solely on the signals from a wristband accelerometer worn by the person. Sequential 3-axis acceleration data gathered within a predefined time-window slice are used as input to the classification system. We are particularly interested in developing a deep-learning model that can outperform conventional machine learning classifiers. A total of 13 activities based on laboratory experiment data are used for the initial performance comparison. We improved classification performance using a convolutional neural network (CNN) combined with auto-encoder feature reduction and parameter tuning. With various publicly available HAR datasets, we also achieved significant improvements in HAR classification. Our CNN model is compared against a recurrent neural network (RNN) with long short-term memory (LSTM) to demonstrate its superiority. Notably, our model could distinguish both general activities and near-identical activities, such as sitting down on a chair versus on the floor, with almost perfect classification accuracy.
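A minimal sketch of the time-window slicing and a plain 1D-CNN baseline is shown below; the window length, stride, and layer sizes are assumptions, and the paper's auto-encoder feature reduction and tuned hyper-parameters are not reproduced.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

WINDOW, STEP, N_CLASSES = 128, 64, 13   # assumed slice length/stride; 13 activities

def make_windows(signal_xyz, window=WINDOW, step=STEP):
    """Slice a (N, 3) accelerometer stream into overlapping time-window slices."""
    return np.stack([signal_xyz[i:i + window]
                     for i in range(0, len(signal_xyz) - window + 1, step)])

model = keras.Sequential([
    layers.Input(shape=(WINDOW, 3)),
    layers.Conv1D(64, 9, activation="relu"),
    layers.MaxPooling1D(2),
    layers.Conv1D(64, 9, activation="relu"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```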

Design and Implementation of CNN-Based Human Activity Recognition System using WiFi Signals (WiFi 신호를 활용한 CNN 기반 사람 행동 인식 시스템 설계 및 구현)

  • Chung, You-shin;Jung, Yunho
    • Journal of Advanced Navigation Technology / v.25 no.4 / pp.299-304 / 2021
  • Existing human activity recognition systems detect activities through devices such as wearable sensors and cameras. However, these methods require additional devices and costs, and cameras in particular raise privacy issues. Using WiFi signals from infrastructure that is already installed can solve this problem. In this paper, we propose a CNN-based human activity recognition system using the channel state information of WiFi signals and present the design and implementation of an accelerated hardware structure. The system defines four possible behaviors while studying in indoor environments and classifies the WiFi channel state information using a convolutional neural network (CNN), achieving an average accuracy of 91.86%. In addition, for acceleration, we present a hardware design for the fully connected layer, which has the highest computation volume in the CNN classifier. In a performance evaluation on an FPGA device, the design achieved a 4.28 times faster calculation time than the software-based system.
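The layer that the paper offloads to the FPGA is, in essence, a dense matrix-vector product. A hedged sketch of that forward pass is given below; the layer dimensions are illustrative, not the paper's.

```python
import numpy as np

def fully_connected_forward(x, W, b):
    """Fully connected layer: y = ReLU(W @ x + b). This dense product
    dominates the classifier's computation, which motivates hardware
    acceleration on the FPGA."""
    return np.maximum(W @ x + b, 0.0)

x = np.random.rand(1024).astype(np.float32)       # flattened CNN features
W = np.random.rand(256, 1024).astype(np.float32)  # weights (sizes assumed)
b = np.zeros(256, dtype=np.float32)
print(fully_connected_forward(x, W, b).shape)     # (256,)
```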

Tempo-oriented music recommendation system based on human activity recognition using accelerometer and gyroscope data (가속도계와 자이로스코프 데이터를 사용한 인간 행동 인식 기반의 템포 지향 음악 추천 시스템)

  • Shin, Seung-Su;Lee, Gi Yong;Kim, Hyoung-Gook
    • The Journal of the Acoustical Society of Korea / v.39 no.4 / pp.286-291 / 2020
  • In this paper, we propose a system that recommends music through tempo-oriented music classification and sensor-based human activity recognition. The proposed method indexes music files using tempo-oriented music classification and recommends suitable music according to the user's recognized activity. For accurate music classification, a dynamic classification based on a modulation spectrum and a sequence classification based on a Mel-spectrogram are used in combination. In addition, simple accelerometer and gyroscope data from the smartphone are applied to deep spiking neural networks to improve activity recognition performance. Finally, music recommendation is performed through a mapping table that considers the relationship between the recognized activity and the indexed music files. The experimental results show that the proposed system is suitable for use in practical mobile devices with a music player.
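A minimal sketch of the mapping-table step is shown below, with assumed activity labels and BPM ranges; the paper's actual table is not given in the abstract.

```python
# Illustrative activity-to-tempo mapping; BPM ranges are assumptions.
ACTIVITY_TO_TEMPO = {
    "resting": (60, 90),
    "walking": (90, 120),
    "running": (120, 170),
}

def recommend(activity, music_index):
    """music_index: list of (title, bpm) pairs produced by the
    tempo-oriented music classifier."""
    low, high = ACTIVITY_TO_TEMPO.get(activity, (0, 300))
    return [title for title, bpm in music_index if low <= bpm < high]

library = [("slow song", 72), ("walk song", 105), ("sprint song", 150)]
print(recommend("running", library))   # ['sprint song']
```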

Video Representation via Fusion of Static and Motion Features Applied to Human Activity Recognition

  • Arif, Sheeraz;Wang, Jing;Fei, Zesong;Hussain, Fida
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.7 / pp.3599-3619 / 2019
  • In a human activity recognition system, both static and motion information play a crucial role in achieving efficient and competitive results. Most existing methods are insufficient for extracting video features and cannot investigate the level of contribution of the static and motion components. Our work highlights this problem and proposes a Static-Motion Fused Features Descriptor (SMFD), which intelligently leverages both static and motion features in the form of a descriptor. First, static features are learned by a two-stream 3D convolutional neural network. Second, trajectories are extracted by tracking key points, and only those located in the central region of the original video frame are selected, in order to reduce irrelevant background trajectories as well as computational complexity. Then, shape and motion descriptors are obtained along with key points using SIFT flow. Next, a Cholesky transformation is introduced to fuse the static and motion feature vectors and guarantee the equal contribution of all descriptors. Finally, a Long Short-Term Memory (LSTM) network is utilized to discover long-term temporal dependencies and produce the final prediction. To confirm the effectiveness of the proposed approach, extensive experiments were conducted on three well-known datasets, i.e., UCF101, HMDB51, and YouTube. The findings show that the resulting recognition system is on par with state-of-the-art methods.
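As a hedged sketch only: one way to realize a Cholesky-based fusion is to weight the normalized static and motion vectors with a row of the Cholesky factor of a 2x2 correlation matrix. The mixing parameter rho and the exact formulation are assumptions; the paper's descriptor construction may differ.

```python
import numpy as np

def cholesky_fuse(static_feat, motion_feat, rho=0.5):
    """Mix normalized static and motion feature vectors using the second row
    of the Cholesky factor of [[1, rho], [rho, 1]]:
    fused = rho * static + sqrt(1 - rho^2) * motion."""
    s = static_feat / (np.linalg.norm(static_feat) + 1e-8)
    m = motion_feat / (np.linalg.norm(motion_feat) + 1e-8)
    L = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))
    return L[1, 0] * s + L[1, 1] * m

print(cholesky_fuse(np.random.rand(128), np.random.rand(128)).shape)  # (128,)
```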

Performance of Exercise Posture Correction System Based on Deep Learning (딥러닝 기반 운동 자세 교정 시스템의 성능)

  • Hwang, Byungsun;Kim, Jeongho;Lee, Ye-Ram;Kyeong, Chanuk;Seon, Joonho;Sun, Young-Ghyu;Kim, Jin-Young
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.22 no.5 / pp.177-183 / 2022
  • Recently, interest in home training has grown due to COVID-19, and research on applying HAR (human activity recognition) technology to home training has followed. However, existing HAR studies have addressed static activities rather than dynamic ones. In this paper, a deep learning model is proposed that analyzes dynamic exercise postures and reports the accuracy of the user's posture. Fitness images from AI-Hub are analyzed with BlazePose. The experiment compares three types of deep learning models: RNN (recurrent neural network), LSTM (long short-term memory), and CNN (convolutional neural network). In the simulation results, the F1-scores of the RNN, LSTM, and CNN were 0.49, 0.87, and 0.98, respectively, confirming that the CNN is more suitable for human activity recognition than the other models. More exercise postures could be analyzed with a greater variety of learning data.
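A hedged sketch of a typical preprocessing step for such a pipeline is shown below: normalizing BlazePose-style keypoints per frame and stacking them into a fixed-length window for a sequence classifier. The normalization scheme and window length are assumptions, not the paper's exact pipeline.

```python
import numpy as np

def normalize_pose(keypoints):
    """keypoints: (33, 2) BlazePose-style (x, y) joints for one frame.
    Center on the hip midpoint and scale by torso length so the feature is
    invariant to the user's position and size."""
    hip_mid = (keypoints[23] + keypoints[24]) / 2.0        # left/right hip
    shoulder_mid = (keypoints[11] + keypoints[12]) / 2.0   # left/right shoulder
    torso = np.linalg.norm(shoulder_mid - hip_mid) + 1e-8
    return ((keypoints - hip_mid) / torso).flatten()       # (66,) feature

def make_sequence(frames, length=32):
    """Stack per-frame features into a fixed-length window for the classifier."""
    return np.stack([normalize_pose(f) for f in frames[:length]])
```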

Dynamic Human Activity Recognition Based on Improved FNN Model

  • Xu, Wenkai;Lee, Eung-Joo
    • Journal of Korea Multimedia Society / v.15 no.4 / pp.417-424 / 2012
  • In this paper, we propose an automatic system that recognizes dynamic human gesture activity, including the Arabic numerals 0 to 9. We assume the gesture trajectory lies approximately in a plane, called the principal gesture plane; the least squares method is used to estimate this plane and project the 3-D trajectory onto it. An improved FNN model combined with an HMM is proposed for dynamic gesture recognition, combining the HMM's ability to model temporal data with that of the fuzzy neural network. The proposed algorithm shows satisfactory performance and a high recognition rate.
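A minimal sketch of estimating the principal gesture plane and projecting the 3-D trajectory onto it is shown below; it uses an SVD-based least-squares plane fit, which is one standard realization of the step the abstract describes.

```python
import numpy as np

def fit_principal_plane(points):
    """Least-squares plane fit to a (N, 3) gesture trajectory: the first two
    right singular vectors span the plane, the third is its normal."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[0], vt[1]

def project_to_plane(points, centroid, u, v):
    d = points - centroid
    return np.stack([d @ u, d @ v], axis=1)     # (N, 2) planar trajectory

trajectory = np.random.rand(50, 3)              # dummy 3-D gesture trajectory
c, u, v = fit_principal_plane(trajectory)
print(project_to_plane(trajectory, c, u, v).shape)   # (50, 2)
```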

Human activity recognition with analysis of angles between skeletal joints using a RGB-depth sensor

  • Ince, Omer Faruk;Ince, Ibrahim Furkan;Yildirim, Mustafa Eren;Park, Jang Sik;Song, Jong Kwan;Yoon, Byung Woo
    • ETRI Journal / v.42 no.1 / pp.78-89 / 2020
  • Human activity recognition (HAR) has become effective as a computer vision tool for video surveillance systems. In this paper, a novel biometric system that can detect human activities in 3D space is proposed. To implement HAR, joint angles obtained using an RGB-depth sensor are used as features. Because HAR operates in the time domain, angle information is stored using the sliding kernel method. The Haar wavelet transform (HWT) is applied to preserve the information of the features before reducing the data dimension. Dimension reduction using an averaging algorithm is also applied to decrease the computational cost, which provides faster performance while maintaining high accuracy. Before classification, the proposed thresholding method with inverse HWT is applied to extract the final feature set. Finally, the K-nearest neighbor (k-NN) algorithm is used to recognize the activity from the given data. The method compares favorably with results obtained using other machine learning algorithms.
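The joint-angle features can be computed from three skeleton joints at a time; a small sketch is given below (the joint coordinates in the example are made up).

```python
import numpy as np

def joint_angle(parent, joint, child):
    """Angle in degrees at `joint`, formed by the segments toward `parent`
    and `child`, each given as 3-D skeleton coordinates."""
    v1 = np.asarray(parent) - np.asarray(joint)
    v2 = np.asarray(child) - np.asarray(joint)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-8)
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# e.g. an elbow angle from shoulder-elbow-wrist positions (values illustrative)
print(joint_angle([0.0, 1.4, 0.0], [0.2, 1.1, 0.0], [0.4, 1.3, 0.1]))
```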

Intelligent Healthcare Service Provisioning Using Ontology with Low-Level Sensory Data

  • Khattak, Asad Masood;Pervez, Zeeshan;Lee, Sung-Young;Lee, Young-Koo
    • KSII Transactions on Internet and Information Systems (TIIS) / v.5 no.11 / pp.2016-2034 / 2011
  • Ubiquitous healthcare (u-Healthcare) is the intelligent delivery of healthcare services to users anytime and anywhere. To provide robust healthcare services, recognition of the patient's daily life activities is required. Context information combined with the user's real-time daily life activities can help provide more personalized services, service suggestions, and changes in system behavior based on the user profile. In this paper, we focus on the intelligent manipulation of activities using the Context-aware Activity Manipulation Engine (CAME), the core of the Human Activity Recognition Engine (HARE). The activities are recognized using video-based, wearable sensor-based, and location-based activity recognition engines. Ontology-based activity fusion with subject profile information is used for a personalized system response. CAME receives real-time low-level activities, infers higher-level activities, performs situation analysis, suggests personalized services, and makes appropriate decisions. A two-phase filtering technique is applied for intelligent processing of information (represented in an ontology) and for making appropriate decisions based on rules incorporating expert knowledge. The experimental results for intelligent processing of activity information showed relatively good accuracy. Moreover, CAME was extended with activity filters and T-Box inference, which resulted in better accuracy and response time compared to the initial CAME results.
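In the spirit of CAME's rule-based phase, the sketch below infers a higher-level situation from a recognized low-level activity plus context; the rules and fields are purely illustrative, not the paper's ontology.

```python
# Hypothetical rules for turning low-level activities and context into
# higher-level situations and service suggestions.
def infer_situation(low_level_activity, context):
    if (low_level_activity == "lying" and context.get("location") == "floor"
            and context.get("duration_min", 0) > 5):
        return "possible fall: notify caregiver"
    if low_level_activity == "eating" and context.get("diabetic"):
        return "log meal and remind blood-glucose check"
    return "no action"

print(infer_situation("lying", {"location": "floor", "duration_min": 12}))
```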