• Title/Summary/Keyword: Activity Recognition


Dense RGB-D Map-Based Human Tracking and Activity Recognition using Skin Joints Features and Self-Organizing Map

  • Farooq, Adnan; Jalal, Ahmad; Kamal, Shaharyar
    • KSII Transactions on Internet and Information Systems (TIIS) / v.9 no.5 / pp.1856-1869 / 2015
  • This paper addresses 3D human activity detection, tracking, and recognition from RGB-D video sequences using a feature-structured framework. Dense depth images are first captured with a depth camera. To track human silhouettes, we consider spatial/temporal continuity and constraints on human motion, and compute the centroid of each activity using a chain-coding mechanism and centroid point extraction. For body skin-joint features, we estimate human skin color to identify body parts (i.e., head, hands, and feet) and extract joint-point information. These joint points are then processed into features, including distance-position features and centroid-distance features. Lastly, self-organizing maps are used to recognize the different activities. Experimental results demonstrate that the proposed method is reliable and efficient in recognizing human poses in different realistic scenes. The proposed system is applicable to consumer applications such as healthcare, video surveillance, and indoor monitoring systems that track and recognize the activities of multiple users.
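The final classification step above relies on a self-organizing map. A minimal sketch of that idea, assuming toy 2-D feature vectors and hypothetical activity labels (not the paper's actual features or training setup):

```python
import math
import random

def train_som(samples, labels, grid=4, dim=2, epochs=200, lr0=0.5, seed=0):
    """Train a tiny 1-D self-organizing map, then label each map node
    by its nearest training sample (a stand-in for the paper's setup)."""
    rng = random.Random(seed)
    nodes = [[rng.random() for _ in range(dim)] for _ in range(grid)]
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)  # learning rate decays to zero
        for s in samples:
            # best-matching unit: the node closest to the sample
            bmu = min(range(grid),
                      key=lambda i: sum((nodes[i][d] - s[d]) ** 2 for d in range(dim)))
            for i in range(grid):
                # neighborhood influence decays with grid distance from the BMU
                h = math.exp(-abs(i - bmu))
                for d in range(dim):
                    nodes[i][d] += lr * h * (s[d] - nodes[i][d])
    node_labels = []
    for i in range(grid):
        j = min(range(len(samples)),
                key=lambda k: sum((nodes[i][d] - samples[k][d]) ** 2 for d in range(dim)))
        node_labels.append(labels[j])
    return nodes, node_labels

def classify(nodes, node_labels, s):
    """An unseen feature vector takes the label of its best-matching node."""
    bmu = min(range(len(nodes)),
              key=lambda i: sum((nodes[i][d] - s[d]) ** 2 for d in range(len(s))))
    return node_labels[bmu]
```

In the paper the inputs would be the distance-position and centroid-distance features rather than these toy vectors.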

Vision-Based Activity Recognition Monitoring Based on Human-Object Interaction at Construction Sites

  • Chae, Yeon; Lee, Hoonyong; Ahn, Changbum R.; Jung, Minhyuk; Park, Moonseo
    • International conference on construction engineering and project management / 2022.06a / pp.877-885 / 2022
  • Vision-based activity recognition has been widely attempted at construction sites to estimate productivity and enhance workers' health and safety. Previous studies have focused on extracting an individual worker's postural information from sequential image frames for activity recognition. However, different trades of workers perform different tasks with similar postural patterns, which degrades the performance of activity recognition based on postural information alone. To address this, this research exploited the concept of human-object interaction, the interaction between a worker and the surrounding objects, based on the fact that trade workers interact with specific objects (e.g., working tools or construction materials) relevant to their trades. The developed approach understands the context of sequential image frames from four features: posture, object, spatial, and temporal. The posture and object features were used to analyze the interaction between the worker and the target object, while the other two features were used to detect movements across the entire image frame in the spatial and temporal domains. Convolutional neural networks (CNNs) served as feature extractors and activity classifiers, and a long short-term memory (LSTM) network was also used as an activity classifier. The approach achieved an average accuracy of 85.96% in classifying 12 target construction tasks performed by two trades of workers, higher than two benchmark models. This result indicates that integrating the concept of human-object interaction offers substantial benefits in activity recognition when workers of various trades coexist in a scene.
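The fusion idea — pairing a worker's posture with the object they interact with — can be sketched with a toy rule table and a majority vote over a frame window. The (posture, object) pairs and task names here are hypothetical, and the vote stands in for the paper's CNN/LSTM classifiers:

```python
from collections import Counter

# Hypothetical mapping from (posture, object) detections to construction
# tasks; in the paper these classes come from CNN feature extractors.
INTERACTION_TO_TASK = {
    ("bending", "rebar"): "rebar tying",
    ("standing", "hammer"): "hammering",
    ("bending", "brick"): "bricklaying",
}

def classify_window(frames):
    """Fuse per-frame (posture, object) detections over a window by
    majority vote; a temporal model such as an LSTM would replace this."""
    votes = Counter(INTERACTION_TO_TASK.get(f, "unknown") for f in frames)
    return votes.most_common(1)[0][0]
```

This illustrates why posture alone is ambiguous: "bending" maps to different tasks depending on the object in hand.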


Real-Time Physical Activity Recognition Using Tri-axis Accelerometer of Smart Phone (스마트 폰의 3축 가속도 센서를 이용한 실시간 물리적 동작 인식 기법)

  • Yang, Hye Kyung; Yong, H.S.
    • Journal of Korea Multimedia Society / v.17 no.4 / pp.506-513 / 2014
  • In recent years, research on recognizing a user's activity with a smartphone has attracted much attention. A smartphone has various sensors, such as a camera, GPS, accelerometer, and audio, and is carried by many people throughout the day, so log data can be collected from its sensors and used to analyze user activities. This paper proposes an approach to inferring a user's physical activity from the smartphone's tri-axis accelerometer. We propose a recognition method for four physical activities: sitting, standing, walking, and running. Raw accelerometer data are transformed so that features can be extracted to categorize the activities. The method achieves high detection accuracy for these physical activity modes, and using it we developed an application that recognizes the user's physical activity in real time. As a result, we obtained an accuracy of over 80%.
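A minimal sketch of the accelerometer pipeline described above: compute the magnitude of each (x, y, z) sample, then separate static from dynamic activities by the window's variability. The thresholds and the assumed device orientation are illustrative, not the paper's trained values:

```python
import math
from statistics import mean, stdev

def classify_activity(window):
    """Classify a window of (x, y, z) accelerometer samples (in g) into
    one of four activities using hand-tuned, illustrative thresholds."""
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in window]
    sd = stdev(mags)  # movement intensity within the window
    if sd < 0.05:  # nearly static posture
        # Gravity direction separates sitting from standing (assumption:
        # the phone is worn so that z points up when the user stands).
        return "standing" if mean(z for _, _, z in window) > 0.8 else "sitting"
    return "walking" if sd < 0.5 else "running"
```

A real system would learn such decision boundaries from labeled windows rather than hard-coding them.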

Human Activity Recognition using an Image Sensor and a 3-axis Accelerometer Sensor (이미지 센서와 3축 가속도 센서를 이용한 인간 행동 인식)

  • Nam, Yun-Young; Choi, Yoo-Joo; Cho, We-Duke
    • Journal of Internet Computing and Services / v.11 no.1 / pp.129-141 / 2010
  • In this paper, we present a wearable intelligent device based on multiple sensors for monitoring human activity. To recognize multiple activities, we developed activity recognition algorithms utilizing an image sensor and a 3-axis accelerometer. We proposed a grid-based optical flow method and used an SVM classifier to analyze the data acquired from the sensors. From the image sensor we used the direction and magnitude of the extracted motion vectors; from the 3-axis accelerometer we computed the correlation between axes and the magnitude of the FFT. In the experimental results, the accuracy of activity recognition based on the image sensor only, the 3-axis accelerometer only, and the proposed multi-sensor method was 55.57%, 89.97%, and 89.97%, respectively.
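The two accelerometer features named above — inter-axis correlation and FFT magnitude — can be sketched directly. This is a generic implementation (a direct DFT, fine for short windows), not the paper's code:

```python
import cmath
import math
from statistics import mean

def axis_correlation(a, b):
    """Pearson correlation between two accelerometer axes."""
    ma, mb = mean(a), mean(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    norm = math.sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return cov / norm if norm else 0.0

def fft_magnitudes(signal):
    """Magnitude spectrum via a direct DFT; an FFT library would be used
    in practice for longer windows."""
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(n)]
```

Both outputs would then be concatenated into the feature vector fed to the SVM classifier.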

Real-time Recognition of Daily Human Activities Using A Single Tri-axial Accelerometer

  • Rubaiyeat, Husne Ara; Khan, Adil Mehmood; Kim, Tae-Seong
    • Proceedings of the Korea Information Processing Society Conference / 2010.04a / pp.289-292 / 2010
  • Recently, human activity recognition using accelerometers has become a prominent research area in proactive computing. In this paper, we present a real-time activity recognition system using a single tri-axial accelerometer. Our system recognizes four primary daily human activities: walking, going upstairs, going downstairs, and sitting. The system also computes additional information from the recognized activities, such as the number of steps, energy expenditure, and activity duration. Finally, all generated information is stored in a database as a daily log.
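The derived quantities mentioned above (step count, energy expenditure) can be sketched from the same accelerometer stream. The threshold crossing and the MET-based calorie formula below are illustrative assumptions, not the paper's method:

```python
import math

def count_steps(window, threshold=1.2):
    """Count steps as upward crossings of an acceleration-magnitude
    threshold (in g); the threshold value is illustrative."""
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in window]
    steps, above = 0, False
    for m in mags:
        if m > threshold and not above:
            steps += 1
            above = True
        elif m <= threshold:
            above = False
    return steps

def energy_expenditure(steps, met=3.5, weight_kg=70, step_seconds=0.5):
    """Rough kcal estimate from step count, assuming a fixed MET value,
    body weight, and step duration (all hypothetical defaults)."""
    hours = steps * step_seconds / 3600
    return met * weight_kg * hours
```

A real pedometer would debounce the peak detection and adapt the threshold per user.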

Intelligent Healthcare Service Provisioning Using Ontology with Low-Level Sensory Data

  • Khattak, Asad Masood; Pervez, Zeeshan; Lee, Sung-Young; Lee, Young-Koo
    • KSII Transactions on Internet and Information Systems (TIIS) / v.5 no.11 / pp.2016-2034 / 2011
  • Ubiquitous Healthcare (u-Healthcare) is the intelligent delivery of healthcare services to users anytime and anywhere. To provide robust healthcare services, recognition of a patient's daily life activities is required. Context information combined with a user's real-time daily life activities enables more personalized services, service suggestions, and changes in system behavior based on the user profile. In this paper, we focus on the intelligent manipulation of activities using the Context-aware Activity Manipulation Engine (CAME), the core of the Human Activity Recognition Engine (HARE). Activities are recognized using video-based, wearable-sensor-based, and location-based activity recognition engines, and ontology-based activity fusion with subject profile information enables personalized system responses. CAME receives real-time low-level activities and infers higher-level activities, analyzes the situation, suggests personalized services, and makes appropriate decisions. A two-phase filtering technique is applied for intelligent processing of information (represented in an ontology) and for decision-making based on rules incorporating expert knowledge. The experimental results for intelligent processing of activity information showed good accuracy, and extending CAME with activity filters and T-Box inference improved both accuracy and response time over the initial results.
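The two-phase idea — filter low-level activities against the subject's profile, then apply expert rules to infer a higher-level situation — can be sketched as plain Python. The profile fields, rule set, and situation names are invented for illustration and are not CAME's actual ontology or T-Box rules:

```python
def infer_situation(low_level_activities, profile):
    """Two-phase sketch: (1) keep only activities the profile marks as
    monitored, (2) apply hand-written rules as a stand-in for
    ontology-based (T-Box) reasoning."""
    # Phase 1: activity filter against the subject profile
    relevant = [a for a in low_level_activities
                if a in profile.get("monitored", [])]
    # Phase 2: rule-based inference of a higher-level situation
    if "lying" in relevant and "no_motion" in relevant:
        return "possible fall - alert caregiver"
    if "walking" in relevant:
        return "normal activity"
    return "unknown"
```

In the actual engine these rules live in an ontology so that new expert knowledge can be added without changing code.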

A Study on Recognition and Attitudes toward the Social Service Activity of Nurses (간호사의 사회봉사활동에 대한 인식과 태도)

  • Song, Ju-Eun; Kim, Yong-Soon; Lee, Sun-Kyoung
    • The Journal of Korean Academic Society of Nursing Education / v.13 no.2 / pp.220-228 / 2007
  • Purpose: The purpose of this study was to investigate nurses' recognition of and attitudes toward social service activity (SSA). Method: This was a descriptive study. The data were collected from July 15 to 31, 2002 using a self-report questionnaire consisting of general characteristics (15 items), recognition (24 items), and attitude (23 items) regarding SSA. The questionnaire was sent to 711 nurses at 38 hospitals in the Gyounggi province area, and 664 questionnaires were returned, a response rate of 93.4%. The data were analyzed with the SPSS 12.0 Win program. Result: Seventy-one percent of nurses had SSA experience during university, but only 14.2% participate in SSA now. The mean score for recognition of SSA was 3.58 (±0.45), and that for attitude was 3.70 (±0.42). Recognition and attitude were positively correlated (r=.398, p=.000): the higher the recognition score, the higher the attitude score. Conclusion: To improve nurses' participation in SSA, research investigating the barriers to participation despite the high levels of recognition and attitude is needed, and programs and systematic management for nurses' SSA participation should be set up.


Vector space based augmented structural kinematic feature descriptor for human activity recognition in videos

  • Dharmalingam, Sowmiya; Palanisamy, Anandhakumar
    • ETRI Journal / v.40 no.4 / pp.499-510 / 2018
  • A vector space based augmented structural kinematic (VSASK) feature descriptor is proposed for human activity recognition. An action descriptor is built by integrating the structural and kinematic properties of the actor using vector space based augmented matrix representation. Using the local or global information separately may not provide sufficient action characteristics. The proposed action descriptor combines both the local (pose) and global (position and velocity) features using augmented matrix schema and thereby increases the robustness of the descriptor. A multiclass support vector machine (SVM) is used to learn each action descriptor for the corresponding activity classification and understanding. The performance of the proposed descriptor is experimentally analyzed using the Weizmann and KTH datasets. The average recognition rate for the Weizmann and KTH datasets is 100% and 99.89%, respectively. The computational time for the proposed descriptor learning is 0.003 seconds, which is an improvement of approximately 1.4% over the existing methods.
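The augmentation step above — joining local pose features with global position and velocity into one descriptor — can be sketched in a few lines. The paper uses an augmented matrix schema and a multiclass SVM; here the descriptor is flattened and a nearest-centroid classifier stands in, so everything below is illustrative:

```python
from statistics import mean

def vsask_descriptor(pose, position, velocity):
    """Combine local (pose) and global (position, velocity) features into
    one action descriptor; flattened here instead of the paper's matrix."""
    return list(pose) + list(position) + list(velocity)

def train_centroids(descriptors, labels):
    """Per-class mean descriptor, a nearest-centroid stand-in for the
    paper's multiclass SVM."""
    by_label = {}
    for d, l in zip(descriptors, labels):
        by_label.setdefault(l, []).append(d)
    return {l: [mean(col) for col in zip(*ds)] for l, ds in by_label.items()}

def predict(centroids, d):
    """Label of the closest class centroid in squared Euclidean distance."""
    return min(centroids,
               key=lambda l: sum((a - b) ** 2 for a, b in zip(centroids[l], d)))
```

The point of the augmentation is visible even in this toy: two actions with identical pose features are still separable through the velocity components.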

Crowd Activity Recognition using Optical Flow Orientation Distribution

  • Kim, Jinpyung; Jang, Gyujin; Kim, Gyujin; Kim, Moon-Hyun
    • KSII Transactions on Internet and Information Systems (TIIS) / v.9 no.8 / pp.2948-2963 / 2015
  • In the field of computer vision, visual surveillance systems have recently become an important research topic. Growth in this area is being driven by both the increasing availability of inexpensive computing devices and image sensors and the general inefficiency of manual surveillance and monitoring. In particular, the ultimate goal of many visual surveillance systems is automatic activity recognition for events at a given site, and this higher level of understanding requires certain lower-level computer vision tasks to be performed first. In this paper, we propose an intelligent activity recognition model that combines a structure learning method with a classification method. The structure learning method is a K2-learning algorithm that generates Bayesian networks of causal relationships between sensors for a given activity. The statistical characteristics of the sensor values and the topological characteristics of the generated graphs are learned for each activity, and a neural network is then designed to classify the current activity from the features extracted from the collected multi-sensor values. Finally, the proposed method is implemented and tested using the PETS2013 benchmark data.
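The optical flow orientation distribution named in the title can be sketched as a simple histogram over flow-vector angles. The 8-direction bin layout is an illustrative choice, and this is only the feature-extraction step, not the paper's K2 structure learning or neural classifier:

```python
import math

def orientation_histogram(flow_vectors, bins=8):
    """Histogram of optical-flow orientations; each vector is (dx, dy).
    Angles are wrapped to [0, 2*pi) before binning."""
    hist = [0] * bins
    for dx, dy in flow_vectors:
        angle = math.atan2(dy, dx) % (2 * math.pi)
        hist[int(angle / (2 * math.pi) * bins) % bins] += 1
    return hist
```

In a crowd scene, a peaked histogram suggests coherent group motion while a flat one suggests dispersal or milling.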

Real-time Activity and Posture Recognition with Combined Acceleration Sensor Data from Smartphone and Wearable Device (스마트폰과 웨어러블 가속도 센서를 혼합 처리한 실시간 행위 및 자세인지 기법)

  • Lee, Hosung; Lee, Sungyoung
    • Journal of KIISE:Software and Applications / v.41 no.8 / pp.586-597 / 2014
  • Next-generation mobile computing is attracting attention as smartphones and wearable devices embedded with various sensors are deployed throughout the world. Existing activity and posture recognition research can be divided into two approaches according to the character of the movement: activity recognition focuses on catching distinct patterns in continuous movement, while posture recognition focuses on sudden changes of posture and body orientation. There is a lack of research on systems that combine these two patterns in a way applicable to the real world. In this paper, we propose a method that uses both a smartphone and a wearable device to recognize activity and posture at the same time. To process the smartphone and wearable sensor data together, we designed a pre-processing method and constructed a recognition model combining signal vector magnitude with vertical and horizontal orientation-pattern features. We considered the activities of cycling, fast/slow walking, and running, and the postures of standing, sitting, and lying down. We confirmed the performance and validity of the method by experiment and demonstrated its feasibility in real-world use.
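The signal vector magnitude (SVM) feature used above, computed per device and then merged, can be sketched as follows. The per-window mean is an illustrative aggregation; the paper additionally uses vertical/horizontal orientation-pattern features not shown here:

```python
import math

def svm_feature(sample):
    """Signal vector magnitude of one 3-axis accelerometer sample."""
    x, y, z = sample
    return math.sqrt(x * x + y * y + z * z)

def combined_features(phone_window, wearable_window):
    """Merge smartphone and wearable accelerometer windows into one
    feature vector of per-device mean SVM values (illustrative)."""
    def mean_svm(win):
        return sum(svm_feature(s) for s in win) / len(win)
    return [mean_svm(phone_window), mean_svm(wearable_window)]
```

Keeping the two devices as separate feature components lets the classifier exploit their different mounting positions instead of averaging them away.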