• Title/Summary/Keyword: Human Action Recognition

Recognizing Actions from Different Views by Topic Transfer

  • Liu, Jia
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.4 / pp.2093-2108 / 2017
  • In this paper, we describe a novel method for recognizing human actions from different views via view knowledge transfer. Our approach is characterized by two aspects: 1) We propose an unsupervised topic transfer model (TTM) to model two view-dependent vocabularies, so that the original bag-of-visual-words (BoVW) representation can be transferred into a bag-of-topics (BoT) representation. The higher-level BoT features are shared across views and thus connect the action models of different views. 2) Our features make it possible to learn a discriminative action model under one view and categorize actions in another view. We tested our approach on the IXMAS data set, and the results are promising given such a simple approach. In addition, we also demonstrate a supervised topic transfer model (STTM), which combines transfer feature learning and discriminative classifier learning in one framework.
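
A minimal sketch of the cross-view topic-transfer idea described above, assuming an off-the-shelf LDA topic model and a linear SVM in place of the authors' TTM/STTM formulation; the vocabulary size, topic count, and randomly generated data are illustrative placeholders:

```python
# BoVW histograms from two views are mapped into a shared bag-of-topics (BoT)
# space with an unsupervised topic model; a classifier trained on the source
# view is then applied to the target view.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X_src = rng.poisson(1.0, size=(200, 500))   # source-view clips, 500-word visual vocabulary
y_src = rng.integers(0, 11, size=200)       # 11 IXMAS action classes (placeholder labels)
X_tgt = rng.poisson(1.0, size=(50, 500))    # unlabeled target-view clips

# Fit the topic model on both views jointly so the topics are view-shared.
lda = LatentDirichletAllocation(n_components=30, random_state=0)
lda.fit(np.vstack([X_src, X_tgt]))

bot_src = lda.transform(X_src)              # BoT representation of the source view
bot_tgt = lda.transform(X_tgt)              # BoT representation of the target view

clf = LinearSVC().fit(bot_src, y_src)       # train under one view ...
pred_tgt = clf.predict(bot_tgt)             # ... categorize actions in another view
```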

A Study on Recognition of Dangerous Behaviors using Privacy Protection Video in Single-person Household Environments

  • Lim, ChaeHyun;Kim, Myung Ho
    • Journal of the Korea Society of Computer and Information / v.27 no.5 / pp.47-54 / 2022
  • Recently, with the development of deep learning technology, research on recognizing human behavior has progressed. In this paper, we study the recognition of risky behaviors that may occur in a single-person household environment using deep learning technology. Because of the nature of single-person households, personal privacy protection is necessary; we therefore recognize dangerous human behavior in video to which Gaussian blur filters have been applied for privacy protection. The dangerous behavior recognition method uses the YOLOv5 model to detect and preprocess the human object in the video, and then feeds the result into the behavior recognition model. The experiments used ResNet3D, I3D, and SlowFast models, and the results show that the SlowFast model achieved the highest accuracy of 95.7% on the privacy-protected video. This makes it possible to recognize dangerous human behavior in a single-person household environment while protecting individual privacy.
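
A rough sketch of the pipeline this abstract describes (Gaussian blur for privacy, YOLOv5 person detection, then a clip-level action classifier); the blur kernel, box selection, and the recognize_action placeholder are assumptions rather than the authors' exact settings:

```python
import cv2
import torch

# Public YOLOv5 model from torch.hub, used here as the person detector.
detector = torch.hub.load("ultralytics/yolov5", "yolov5s")

def recognize_action(crops):
    """Placeholder for the clip-level classifier; the paper compares
    ResNet3D, I3D, and SlowFast models for this step."""
    raise NotImplementedError

def classify_clip(frames):
    """frames: list of BGR images forming one video clip."""
    crops = []
    for frame in frames:
        blurred = cv2.GaussianBlur(frame, (31, 31), 0)        # privacy protection
        results = detector(cv2.cvtColor(blurred, cv2.COLOR_BGR2RGB))
        boxes = results.xyxy[0]                               # [x1, y1, x2, y2, conf, cls]
        people = boxes[boxes[:, 5] == 0]                      # COCO class 0 = person
        if len(people):
            x1, y1, x2, y2 = people[0, :4].int().tolist()
            crops.append(blurred[y1:y2, x1:x2])               # keep only the person region
    return recognize_action(crops) if crops else None
```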

A Study on Mariners' Standard Behavior for Collision Avoidance (3) - Modeling of the execution process of an avoiding action based on human factors -

  • Park, Jung-Sun;Kobayashi, Hiroaki;Yea, Byeong-Deok
    • Journal of Navigation and Port Research / v.32 no.4 / pp.279-285 / 2008
  • In a previous study, we proposed methods for modeling mariners' standard behavior for collision avoidance by analyzing the mariners' recognition process. As a follow-up, the aim of this study is to build a model of the mariners' execution process, one of the six processes involved in collision avoidance. We therefore describe the structure of mariners' information processing when taking avoiding action and analyze the relation between mariners' behavior and the factors required in this process. We then build a model of mariners' standard behavior for the execution process based on the ship-handling characteristics of mariners obtained from international collaborative research on human factors. The contents of the execution process are defined on the basis of mariners' standard collision-avoidance behavior, and the mariners' information processing is formulated accordingly.

A Development of Gesture Interfaces using Spatial Context Information

  • Kwon, Doo-Young;Bae, Ki-Tae
    • International Journal of Contents / v.7 no.1 / pp.29-36 / 2011
  • Gestures have been employed in human-computer interaction to build more natural interfaces in new computational environments. In this paper, we describe our approach to developing a gesture interface that uses spatial context information. The proposed gesture interface recognizes a system action (e.g., a command) by integrating gesture information with spatial context information within a probabilistic framework. Two ontologies of spatial context are introduced based on the spatial information of gestures: gesture volume and gesture target. Prototype applications are developed for a smart-environment scenario in which a user interacts through gestures with digital information embedded in physical objects.
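
A toy illustration of the kind of probabilistic fusion the abstract mentions, combining gesture evidence with the two spatial contexts (gesture volume and gesture target) in a naive-Bayes-style posterior; the action names and probability tables are invented placeholders, not values from the paper:

```python
# Posterior over system actions given a gesture and its spatial context.
p_action  = {"turn_on_lamp": 0.5, "open_door": 0.5}                      # priors
p_gesture = {("point", "turn_on_lamp"): 0.7, ("point", "open_door"): 0.3}
p_volume  = {("small", "turn_on_lamp"): 0.6, ("small", "open_door"): 0.4}
p_target  = {("lamp", "turn_on_lamp"): 0.9, ("lamp", "open_door"): 0.1}

def posterior(gesture, volume, target):
    scores = {}
    for action, prior in p_action.items():
        scores[action] = (prior
                          * p_gesture.get((gesture, action), 1e-6)
                          * p_volume.get((volume, action), 1e-6)
                          * p_target.get((target, action), 1e-6))
    z = sum(scores.values())
    return {action: s / z for action, s in scores.items()}

print(posterior("point", "small", "lamp"))   # "turn_on_lamp" dominates
```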

Disaster Detection Using Human Action Recognition (인간 행동 분석을 이용한 재해 발생 탐지 모델)

  • Han, Yul-Kyu;Choi, Young-Bok
    • Proceedings of the Korea Contents Association Conference / 2019.05a / pp.433-434 / 2019
  • Sudden man-made disasters cause a great deal of loss of life worldwide. Prompt evacuation is critical in such disasters, and rapid evacuation requires quickly detecting that a disaster has occurred. In this paper, we propose a disaster detection model for quickly determining whether a disaster such as a fire or a terrorist attack has occurred in a public place. Using the accelerometer built into a smartphone, we collected data on human behavior in normal situations and during disasters, and confirmed that the proposed LSTM deep learning model can immediately detect the occurrence of a disaster.
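
A minimal sketch of an LSTM classifier over smartphone accelerometer windows of the kind the abstract describes; the layer sizes, window length, and two-class (normal vs. disaster) setup are assumptions, not the reported architecture:

```python
import torch
import torch.nn as nn

class AccelLSTM(nn.Module):
    def __init__(self, n_features=3, hidden=64, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):             # x: (batch, time, 3) accelerometer axes
        _, (h, _) = self.lstm(x)
        return self.head(h[-1])       # logits: normal vs. disaster-time behavior

model = AccelLSTM()
windows = torch.randn(8, 128, 3)      # 8 windows of 128 samples (x/y/z axes)
logits = model(windows)               # shape: (8, 2)
```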

A Study on Needs Perception Toward Educational Purposes of Home Economics Subject in Middle Schools (중학교 가정과 교육목표의 필요도에 대한 인식)

  • Ryu, Hwa-Rim;Chong, Young-Sook;Chae, Jung-Hyun
    • Korean Journal of Human Ecology / v.6 no.1 / pp.111-127 / 1997
  • This study examined home economics (HE) teachers' and first-grade students' perceived needs toward the purposes of HE education in middle school, which has been practiced since 1995 for both male and female students. The study attempted (1) to analyze the needs priority among the educational purposes of the HE subject in relation to three systems of action; (2) to compare differences between HE teachers' and students' perceptions of the degree of importance and achievement of the educational purposes of the HE subject; and (3) to examine what they perceive as the problems in current HE education. The survey was conducted with samples of 600 first-grade middle school students and 101 middle school HE teachers during February-March 1996. The questionnaire used in this study was a modified version of one already developed along with the 6th HE curriculum. For data analysis, the SAS program was used to compute means and to perform both a discrepancy test and a t-test. The findings were summarized as follows. First, with respect to each group's perception of the importance of the purposes related to the three systems of action, HE teachers emphasized the importance of the purposes related to emancipatory action, while students placed more emphasis on the purposes related to technical action. Second, in terms of the degree of achievement, students had a more positive perception of the achievement of the purposes related to technical action than HE teachers did; both groups reported a low level of achievement of the purposes related to emancipatory action. Third, with respect to needs priority, HE teachers placed the first priority on emancipatory action, the second on technical action, and the last on communicative action; students placed the first priority on technical action, the second on communicative action, and the last on emancipatory action. In addition, the analysis of opinions on the 6th curriculum revealed that most respondents found it necessary to secure an adequate amount of class time for HE education. They also shared the view that the HE curriculum should be renovated into one that fully appreciates the purposes of HE education from the perspective of the practical concerns of action, which are distinct from the functional and technical concerns of passive learning. The findings of this study can serve as basic data for establishing new purposes of HE education that put more emphasis on the purposes related to emancipatory action, as well as for developing an enhanced curriculum and reinforcing the identity of HE education.

Toward a Possibility of the Unified Model of Cognition (통합적 인지 모형의 가능성)

  • Rhee Young-Eui
    • Journal of Science and Technology Studies / v.1 no.2 s.2 / pp.399-422 / 2001
  • The models of human cognition currently discussed in cognitive science are not fully adequate. The symbolic model of traditional artificial intelligence works for reasoning and problem-solving tasks but does not fit pattern-recognition tasks such as letter/sound recognition. Connectionism shows the opposite pattern: connectionist systems have been shown to be very strong at pattern recognition but weak at most logical tasks. Brooks' situated action theory rejects the notion of representation presupposed by both traditional artificial intelligence and connectionism and proposes a subsumption model based on perceptions from the real world; however, situated action theory has not been successfully applied to human cognition so far either. To emphasize these characteristics, I refer to the models as the 'left-brain model', the 'right-brain model', and the 'robot model', respectively. After examining these models in terms of the substantial elements of cognition (mental state, mental procedure, basic element of cognition, rule of cognition, appropriate level of analysis, and architecture of cognition), I draw three arguments of embodiment. I suggest a way of unifying the existing models by examining their theoretical compatibility as revealed in those arguments.

Design of Parallel Input Pattern and Synchronization Method for Multimodal Interaction (멀티모달 인터랙션을 위한 사용자 병렬 모달리티 입력방식 및 입력 동기화 방법 설계)

  • Im, Mi-Jeong;Park, Beom
    • Journal of the Ergonomics Society of Korea / v.25 no.2 / pp.135-146 / 2006
  • Multimodal interfaces are recognition-based technologies that interpret and encode hand gestures, eye gaze, movement patterns, speech, physical location, and other natural human behaviors. A modality is the type of communication channel used for interaction; it also covers the way an idea is expressed or perceived, or the manner in which an action is performed. Multimodal interfaces are the technologies that constitute multimodal interaction processes, which occur consciously or unconsciously while a human communicates with a computer, so their input/output forms differ from those of existing interfaces. Moreover, different people show different cognitive styles, and individual preferences play a role in the selection of one input mode over another. Therefore, to design effective multimodal user interfaces, the input/output structure needs to be formulated through research on human cognition. This paper analyzes the characteristics of each human modality and suggests modality combination types and dual coding for formulating multimodal interaction. It then designs a multimodal language and an input synchronization method according to the granularity of input synchronization. To effectively guide the development of next-generation multimodal interfaces, substantial cognitive modeling will be needed to understand the temporal and semantic relations between different modalities, their joint functionality, and their overall potential for supporting computation in different forms. This paper is expected to show multimodal interface designers how to organize and integrate human input modalities when interacting with multimodal interfaces.
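
A toy sketch of one way to synchronize parallel modality inputs by timestamp, grouping events from different channels that arrive close together; the event format and the 0.5 s window are assumptions, not the paper's design:

```python
from dataclasses import dataclass

@dataclass
class ModalityEvent:
    modality: str      # e.g. "speech", "gesture"
    payload: str
    timestamp: float   # seconds

def synchronize(events, window=0.5):
    """Group events whose timestamps fall within `window` seconds of the
    first event in the current group."""
    events = sorted(events, key=lambda e: e.timestamp)
    groups, current = [], []
    for e in events:
        if current and e.timestamp - current[0].timestamp > window:
            groups.append(current)
            current = []
        current.append(e)
    if current:
        groups.append(current)
    return groups

stream = [ModalityEvent("speech", "put that", 1.00),
          ModalityEvent("gesture", "point@vase", 1.20),
          ModalityEvent("speech", "there", 2.10)]
print(synchronize(stream))   # two groups: the fused speech+gesture pair, then "there"
```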

A Distributed Real-time 3D Pose Estimation Framework based on Asynchronous Multiviews

  • Taemin, Hwang;Jieun, Kim;Minjoon, Kim
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.2 / pp.559-575 / 2023
  • 3D human pose estimation is widely applied in various fields, including action recognition, sports analysis, and human-computer interaction, and it has achieved significant progress with the introduction of convolutional neural networks (CNNs). Recently, several studies have proposed multiview approaches to avoid the occlusions that affect single-view approaches. However, as the number of cameras increases, a CNN-based 3D pose estimation system may lack sufficient computational resources. In addition, when a single host system uses multiple cameras, the data transmission speed becomes inadequate owing to bandwidth limitations. To address this problem, we propose a distributed real-time 3D pose estimation framework based on asynchronous multiple cameras. The proposed framework comprises a central server and multiple edge devices. Each edge device estimates a 2D human pose from its view and sends it to the central server. Subsequently, the central server synchronizes the received 2D human pose data based on the timestamps. Finally, the central server reconstructs a 3D human pose using geometric triangulation. We demonstrate that the proposed framework increases the percentage of detected joints and successfully estimates 3D human poses in real time.
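
A minimal sketch of the server-side reconstruction step: once 2D joints from different cameras have been synchronized, each joint can be triangulated with the standard direct linear transform (DLT); the camera matrices and joint coordinates below are placeholders:

```python
import numpy as np

def triangulate(P1, P2, pt1, pt2):
    """P1, P2: 3x4 camera projection matrices; pt1, pt2: 2D joint (x, y)."""
    A = np.stack([
        pt1[0] * P1[2] - P1[0],
        pt1[1] * P1[2] - P1[1],
        pt2[0] * P2[2] - P2[0],
        pt2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)           # least-squares solution of A X = 0
    X = vt[-1]
    return X[:3] / X[3]                   # homogeneous -> Euclidean 3D point

P1 = np.hstack([np.eye(3), np.zeros((3, 1))])                   # reference camera
P2 = np.hstack([np.eye(3), np.array([[-0.2], [0.0], [0.0]])])   # second camera, shifted
print(triangulate(P1, P2, np.array([0.3, 0.2]), np.array([0.1, 0.2])))  # ~[0.3, 0.2, 1.0]
```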

Design of HCI System of Museum Guide Robot Based on Visual Communication Skill

  • Qingqing Liang
    • Journal of Information Processing Systems / v.20 no.3 / pp.328-336 / 2024
  • Visual communication is widely used and increasingly refined in modern society, where the demand for cultural and spiritual enrichment is growing. Museum guide robots are among the many service robots that can replace humans in providing services such as exhibition display, interpretation, and dialogue. To improve museum guide robots, this paper proposes a human-robot interaction system based on visual communication skills. The system is built on a deep neural network and uses computer vision analysis to introduce a Tiny+CBAM network in the gesture recognition component, which combines basic gestures and gesture states to design and evaluate gesture actions. The test results indicated that the improved Tiny+CBAM network increased the mean average precision by 13.56% while losing fewer than 3 frames per second during static basic gesture recognition. In tests of dynamic gesture performance, the system was over 95% accurate for all items except double click, and 100% accurate for the action displayed on the current page.
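
For context, the CBAM referenced in "Tiny+CBAM" is a published attention module that applies channel attention followed by spatial attention; below is a compact PyTorch sketch of such a block (the channel count, reduction ratio, and kernel size are illustrative):

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    def __init__(self, channels, reduction=16, kernel_size=7):
        super().__init__()
        # Shared MLP for channel attention over average- and max-pooled features.
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))
        # 2-channel (avg, max) map -> single spatial attention map.
        self.spatial = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):                          # x: (B, C, H, W)
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)                  # channel attention
        s = torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))                          # spatial attention

feat = torch.randn(1, 64, 32, 32)
print(CBAM(64)(feat).shape)                        # torch.Size([1, 64, 32, 32])
```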