• Title/Abstract/Keywords: Action detection

Search results: 333

사람 행동 인식에서 반복 감소를 위한 저수준 사람 행동 변화 감지 방법 (Detection of Low-Level Human Action Change for Reducing Repetitive Tasks in Human Action Recognition)

  • 노요환;김민정;이도훈
    • 한국멀티미디어학회논문지 / Vol. 22 No. 4 / pp.432-442 / 2019
  • Most current human action recognition methods are based on deep learning and therefore incur a very high computational cost. In this paper, we propose an action change detection method to reduce repetitive human action recognition tasks. In practice, simple actions are often repeated, and applying costly action recognition methods to every repetition is time consuming. The proposed method decides whether the action has changed, and action recognition is executed only when an action change is detected. The action change detection process is as follows. First, the number of non-zero pixels is extracted from the motion history image to generate one-dimensional time-series data. Second, an action change is detected by comparing the difference between the current trend and the local extremum of the time series against a threshold. In experiments, the proposed method achieved 89% balanced accuracy on action change data and reduced action recognition repetition by 61%.
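The change-detection procedure summarized in this abstract can be illustrated with a short, hedged sketch: the non-zero pixel count of a motion history image (MHI) is tracked as a one-dimensional time series, and a change is flagged when the current value departs from the most recent local extremum by more than a threshold. The synthetic frames and the threshold value are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def nonzero_count(mhi: np.ndarray) -> int:
    """Number of non-zero pixels in a motion history image."""
    return int(np.count_nonzero(mhi))

def detect_action_changes(series, threshold=500):
    """Return indices where the series deviates from the last local
    extremum by more than `threshold`."""
    changes = []
    extremum = series[0]
    for t in range(1, len(series)):
        prev, cur = series[t - 1], series[t]
        if (cur - prev) * (prev - extremum) < 0:   # trend direction flipped
            extremum = prev                        # record the turning point
        if abs(cur - extremum) > threshold:        # large departure -> change
            changes.append(t)
            extremum = cur                         # restart from the new level
    return changes

# Example with synthetic MHI frames (random binary masks).
frames = [np.random.rand(240, 320) > 0.7 for _ in range(100)]
series = [nonzero_count(f) for f in frames]
print(detect_action_changes(series, threshold=500))
```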

온라인 행동 탐지 기술 동향 (Trends in Online Action Detection in Streaming Videos)

  • 문진영;김형일;이용주
    • 전자통신동향분석 / Vol. 36 No. 2 / pp.75-82 / 2021
  • Online action detection (OAD) in streaming videos is an attractive research area that has aroused interest lately. Although most studies on action understanding have considered action recognition in well-trimmed videos and offline temporal action detection in untrimmed videos, online action detection methods are required to monitor action occurrences in streaming videos. OAD predicts action probabilities for the current frame or frame sequence using a fixed-size video segment consisting of past and current frames. In this article, we discuss deep learning-based OAD models. In addition, we investigate OAD evaluation methodologies, including benchmark datasets and performance measures, and compare the performances of the presented OAD models.
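As a rough illustration of the per-frame inference loop that the surveyed OAD methods share, the sketch below keeps a fixed-size buffer of past and current frame features and produces class probabilities at every time step; the feature extractor, classifier, segment length, and class count are placeholders rather than any specific model from the article.

```python
from collections import deque
import numpy as np

SEGMENT_LEN = 16      # number of past + current frames in the input segment (assumed)
NUM_CLASSES = 21      # e.g., 20 action classes + background (assumed)

def extract_feature(frame: np.ndarray) -> np.ndarray:
    return frame.mean(axis=(0, 1))                 # stand-in for a CNN feature

def classify_segment(segment: np.ndarray) -> np.ndarray:
    logits = np.random.randn(NUM_CLASSES)          # stand-in for an OAD model
    e = np.exp(logits - logits.max())
    return e / e.sum()                             # per-class probabilities

buffer = deque(maxlen=SEGMENT_LEN)                 # sliding window over the stream
for t in range(100):                               # streaming frames
    frame = np.random.rand(224, 224, 3)            # dummy current frame
    buffer.append(extract_feature(frame))
    if len(buffer) == SEGMENT_LEN:
        probs = classify_segment(np.stack(buffer)) # action probabilities at time t
        print(t, int(probs.argmax()))
```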

시간적 행동 탐지 기술 동향 (Trends in Temporal Action Detection in Untrimmed Videos)

  • 문진영;김형일;박종열
    • 전자통신동향분석 / Vol. 35 No. 3 / pp.20-33 / 2020
  • Temporal action detection (TAD) in untrimmed videos is an important but challenging problem in the field of computer vision that has gathered increasing interest recently. Although most studies on actions in videos have addressed action recognition in trimmed videos, TAD methods are required to understand real-world untrimmed videos, which consist mostly of background along with some meaningful action instances belonging to multiple action classes. TAD is mainly composed of temporal action localization, which generates temporal action proposals such as single-action segments, and action recognition, which classifies the proposals into action classes. However, generating temporal action proposals with accurate temporal boundaries remains challenging. In this paper, we discuss high-performing TAD technologies in terms of representative deep learning-based TAD studies. Further, we investigate evaluation methodologies for TAD, such as benchmark datasets and performance measures, and compare the performance of the discussed TAD models.
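The two-stage structure described above (proposal generation followed by proposal classification) can be sketched as follows; both stages are stand-ins for learned models, and the video length, segment ranges, and class names are invented for illustration.

```python
import numpy as np

def generate_proposals(video_len: float, num: int = 5):
    """Stand-in proposal generator: random temporal (start, end) segments in seconds."""
    starts = np.sort(np.random.uniform(0, video_len * 0.8, num))
    ends = starts + np.random.uniform(1.0, video_len * 0.2, num)
    return [(float(s), float(min(e, video_len))) for s, e in zip(starts, ends)]

def classify_proposal(segment, classes=("run", "jump", "background")):
    """Stand-in classifier: returns a class label and a confidence score."""
    scores = np.random.dirichlet(np.ones(len(classes)))
    return classes[int(scores.argmax())], float(scores.max())

detections = []
for start, end in generate_proposals(video_len=120.0):
    label, score = classify_proposal((start, end))
    if label != "background":                      # keep only foreground actions
        detections.append((start, end, label, score))
print(detections)
```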

Quick and easy game bot detection based on action time interval estimation

  • Yong Goo Kang;Huy Kang Kim
    • ETRI Journal / Vol. 45 No. 4 / pp.713-723 / 2023
  • Game bots are illegal programs that facilitate account growth and goods acquisition through continuous and automatic play. Early detection is required to minimize the damage caused by evolving game bots. In this study, we propose a game bot detection method based on action time intervals (ATIs). We observe the actions of bots in a game and identify the most frequently occurring actions. We extract the frequency, ATI average, and ATI standard deviation for each identified action, which are used as machine learning features. Furthermore, we measure the performance using actual logs of the Aion game to verify the validity of the proposed method. The accuracy and precision of the proposed method are 97% and 100%, respectively. The results show that game bots can be detected early because the proposed method performs well using only a single day of data, achieving performance similar to that of a previous study on the same dataset. The detection performance of the model is maintained even two months after training, without any revision process.
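A minimal sketch of the ATI feature extraction described in the abstract: for each frequently occurring action in a player's log, the frequency, the mean action time interval, and its standard deviation are computed from the gaps between consecutive occurrences. The log format and action names are assumptions for illustration.

```python
from collections import defaultdict
import statistics

def ati_features(log):
    """log: list of (timestamp_seconds, action_name) sorted by time."""
    times = defaultdict(list)
    for ts, action in log:
        times[action].append(ts)
    features = {}
    for action, ts_list in times.items():
        gaps = [b - a for a, b in zip(ts_list, ts_list[1:])]   # action time intervals
        features[action] = {
            "frequency": len(ts_list),
            "ati_mean": statistics.mean(gaps) if gaps else 0.0,
            "ati_std": statistics.pstdev(gaps) if len(gaps) > 1 else 0.0,
        }
    return features   # one feature set per player, fed to a downstream classifier

log = [(0.0, "attack"), (1.0, "attack"), (2.0, "attack"), (5.5, "loot"),
       (3.0, "attack"), (4.0, "attack")]
print(ati_features(sorted(log)))
```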

Facial Action Unit Detection with Multilayer Fused Multi-Task and Multi-Label Deep Learning Network

  • He, Jun;Li, Dongliang;Bo, Sun;Yu, Lejun
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 13 No. 11 / pp.5546-5559 / 2019
  • Facial action units (AUs) have recently drawn increased attention because they can be used to recognize facial expressions. A variety of methods have been designed for frontal-view AU detection, but few can handle multi-view face images. In this paper we propose a method for multi-view facial AU detection using a fused multilayer, multi-task, and multi-label deep learning network. The network completes two tasks: AU detection, a multi-label problem, and facial view detection, a single-label problem. A residual network and multilayer fusion are applied to obtain more representative features. Our method is effective and performs well: the F1 score on FERA 2017 is 13.1% higher than the baseline, and the facial view recognition accuracy is 0.991. This shows that our multi-task, multi-label model achieves good performance on both tasks.
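The multi-task, multi-label idea can be sketched as a shared backbone with a sigmoid (multi-label) head for AU detection and a softmax (single-label) head for view classification; the backbone, layer sizes, and the numbers of AUs and views below are illustrative and not the paper's architecture.

```python
import torch
import torch.nn as nn

class MultiTaskAUNet(nn.Module):
    def __init__(self, feat_dim=512, num_aus=10, num_views=9):
        super().__init__()
        self.backbone = nn.Sequential(            # stand-in for a residual CNN
            nn.Flatten(), nn.Linear(3 * 64 * 64, feat_dim), nn.ReLU())
        self.au_head = nn.Linear(feat_dim, num_aus)      # multi-label task
        self.view_head = nn.Linear(feat_dim, num_views)  # single-label task

    def forward(self, x):
        feat = self.backbone(x)
        return self.au_head(feat), self.view_head(feat)

model = MultiTaskAUNet()
x = torch.randn(4, 3, 64, 64)                      # dummy face crops
au_logits, view_logits = model(x)

# Joint loss: binary cross-entropy over AUs plus cross-entropy over views.
au_target = torch.randint(0, 2, (4, 10)).float()
view_target = torch.randint(0, 9, (4,))
loss = (nn.BCEWithLogitsLoss()(au_logits, au_target)
        + nn.CrossEntropyLoss()(view_logits, view_target))
loss.backward()
```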

계층적 군집화 기반 Re-ID를 활용한 객체별 행동 및 표정 검출용 영상 분석 시스템 (Video Analysis System for Action and Emotion Detection by Object with Hierarchical Clustering based Re-ID)

  • 이상현;양성훈;오승진;강진범
    • 지능정보연구 / Vol. 28 No. 1 / pp.89-106 / 2022
  • With the recent surge in video data, demand for various computer vision technologies to process it effectively, such as object detection and tracking, action recognition, facial expression recognition, and re-identification (Re-ID), has also surged. However, object detection and tracking techniques suffer from many difficulties that degrade performance, such as objects leaving and re-entering the camera's field of view and occlusion. Accordingly, action and expression recognition models built on top of object detection and tracking also struggle to extract data for each object. In addition, deep learning architectures that combine multiple models suffer performance degradation due to bottlenecks and insufficient optimization. In this study, we propose a video analysis system for action and expression detection that applies a re-identification (Re-ID) technique based on single-linkage hierarchical clustering and a processing technique that maximizes GPU memory throughput to a pipeline composed of a YOLOv5-based DeepSORT object tracking model, a SlowFast-based action recognition model, a Torchreid-based re-identification model, and the facial expression recognition model of AWS Rekognition. The proposed system achieves higher accuracy than a re-identification model that uses a simple metric, with near-real-time processing performance; it prevents tracking failures caused by objects leaving and re-entering the scene or by occlusion, and it analyzes videos efficiently by continuously linking the per-object action and expression recognition results to the same object.
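A hedged sketch of the single-linkage re-identification step mentioned above: tracklet-level appearance embeddings (e.g., averaged Torchreid features) are clustered with single-linkage hierarchical clustering, and tracklets that fall into the same cluster are treated as the same identity. The embeddings, distance metric, and threshold below are synthetic placeholders.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# One appearance embedding per tracklet (stand-in for averaged Re-ID features).
embeddings = np.random.rand(8, 128)

# Single-linkage clustering on pairwise cosine distances between tracklets.
dists = pdist(embeddings, metric="cosine")
tree = linkage(dists, method="single")

# Tracklets closer than the threshold are assigned the same identity.
identity_ids = fcluster(tree, t=0.3, criterion="distance")
print(identity_ids)   # e.g., [1 1 2 3 2 ...] -> same number = same person
```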

침입신호 상관성을 이용한 침입 탐지 시스템 (Intrusion Detection System Using the Correlation of Intrusion Signature)

  • 나근식
    • 인터넷정보학회논문지 / Vol. 5 No. 2 / pp.57-67 / 2004
  • This paper presents an intrusion detection system architecture that can improve the performance and detection accuracy of network intrusion detection systems. A network intrusion usually consists of several stages of intrusion actions, and each action can be detected by a specific intrusion signature. However, ordinary actions that are not intrusions can also produce the same signatures as intrusive behavior, so judging an intrusion from a single specific signature can lead to incorrect decisions. The proposed system exploits the correlation among the signatures of the stages that make up an intrusion, so its decisions about intrusions can be highly reliable. It can also effectively detect variants of known intrusions.
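The signature-correlation idea can be illustrated with a toy rule: an alert is raised only when the signatures of successive intrusion stages are all observed from the same source, in order, within a time window. The stage names, window length, and event format are assumptions made for the sketch, not details from the paper.

```python
from collections import defaultdict

STAGES = ["port_scan", "exploit_attempt", "privilege_escalation"]
WINDOW = 600.0   # seconds within which correlated stages must occur (assumed)

def correlate(events):
    """events: list of (timestamp, source_ip, signature) sorted by time."""
    seen = defaultdict(dict)          # source_ip -> {signature: last_timestamp}
    alerts = []
    for ts, src, sig in events:
        seen[src][sig] = ts
        stamps = [seen[src].get(s) for s in STAGES]
        if all(s is not None for s in stamps) and stamps == sorted(stamps) \
                and ts - stamps[0] <= WINDOW:
            alerts.append((src, ts))  # full correlated sequence -> intrusion
    return alerts

events = [(0, "10.0.0.5", "port_scan"),
          (30, "10.0.0.5", "exploit_attempt"),
          (90, "10.0.0.5", "privilege_escalation")]
print(correlate(events))
```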


Improved DT Algorithm Based Human Action Features Detection

  • Hu, Zeyuan;Lee, Suk-Hwan;Lee, Eung-Joo
    • 한국멀티미디어학회논문지 / Vol. 21 No. 4 / pp.478-484 / 2018
  • The choice of motion features directly influences the result of a human action recognition method. A single feature is often affected differently by many factors, such as the appearance of the human body, the environment, and the video camera, which restricts the accuracy of action recognition. Based on a study of the representation and recognition of human actions, and with full consideration of the advantages and disadvantages of different features, we note that the Dense Trajectories (DT) algorithm is a classic feature extraction algorithm in the field of action recognition but has some defects in its use of optical flow images. In this paper, we use the improved Dense Trajectories (iDT) algorithm to optimize and extract the optical flow features of human motion, combine it with a Support Vector Machine to identify human actions, and use images from the KTH database for training and testing.
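As a sketch of the final classification step, fixed-length encodings of iDT descriptors (e.g., bag-of-words histograms) can be fed to a support vector machine; the feature vectors below are random placeholders standing in for real iDT encodings of KTH clips, and the SVM hyperparameters are illustrative.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((120, 256))                  # encoded iDT features, one row per clip
y = rng.integers(0, 6, size=120)            # 6 KTH action classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf", C=10.0, gamma="scale")   # RBF-SVM on the iDT encodings
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```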

시각장애인 보조를 위한 영상기반 휴먼 행동 인식 시스템 (Image Based Human Action Recognition System to Support the Blind)

  • 고병철;황민철;남재열
    • 정보과학회 논문지 / Vol. 42 No. 1 / pp.138-143 / 2015
  • To assist blind users with scene understanding, this paper proposes a system that recognizes human actions through communication between an earring-type Bluetooth camera and an action recognition server. First, when a blind user captures a scene of interest with the earring-type Bluetooth camera, the captured image is transmitted to the recognition server through a smartphone paired with the camera. The recognition server detects humans and objects using image analysis algorithms and recognizes human actions by analyzing human poses. The recognized action information is sent back to the smartphone, and the user hears the recognition result via text-to-speech (TTS) on the smartphone. On experimental data captured indoors and outdoors, the proposed system achieved a human action recognition accuracy of 60.7%.
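The end-to-end flow of the assistive system can be outlined with stubbed stages: wearable camera capture, relay through the paired smartphone, server-side detection and pose-based action recognition, and text-to-speech playback. Every function body below is a placeholder; only the ordering of stages follows the abstract.

```python
def capture_image():
    return b"jpeg bytes"                 # from the earring-type Bluetooth camera

def send_to_server(image):
    return image                         # relayed via the paired smartphone

def recognize_action(image):
    # Server side: detect humans/objects, analyze the pose, classify the action.
    pose = "raised_hand"                 # stand-in for a pose estimation result
    return "waving" if pose == "raised_hand" else "unknown"

def speak(text):
    print("TTS:", text)                  # smartphone text-to-speech playback

speak(recognize_action(send_to_server(capture_image())))
```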

Extensible Hierarchical Method of Detecting Interactive Actions for Video Understanding

  • Moon, Jinyoung;Jin, Junho;Kwon, Yongjin;Kang, Kyuchang;Park, Jongyoul;Park, Kyoung
    • ETRI Journal / Vol. 39 No. 4 / pp.502-513 / 2017
  • For video understanding, namely analyzing who did what in a video, actions along with objects are primary elements. Most studies on actions have handled recognition problems for well-trimmed videos and focused on enhancing classification performance. However, action detection, including localization as well as recognition, is required because, in general, actions intersect in time and space. In addition, most studies have not considered extensibility to a newly added action that has not been previously trained. Therefore, this paper proposes an extensible hierarchical method for detecting generic actions, which combine object movements and spatial relations between two objects, and inherited actions, which are determined from the related objects through an ontology- and rule-based methodology. The hierarchical design of the method enables it to detect any interactive action based on the spatial relations between two objects. The method, using object information, achieves an F-measure of 90.27%. Moreover, this paper describes the extensibility of the method to a new action contained in a video from a domain different from that of the dataset used.
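A simplified sketch of the rule-based layer: a generic action is derived from an object's movement and its spatial relation to a second object, and an inherited action is then looked up in a small ontology keyed by the object classes. The relations, rules, and ontology entries are invented for illustration and are not the paper's ontology.

```python
def spatial_relation(box_a, box_b):
    """Boxes are (x1, y1, x2, y2); returns a coarse relation between them."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    if ax1 < bx2 and bx1 < ax2 and ay1 < by2 and by1 < ay2:
        return "overlapping"
    return "apart"

def generic_action(prev_box, cur_box, other_box):
    """Combine object movement with the spatial relation to another object."""
    moved = sum(abs(c - p) for c, p in zip(cur_box, prev_box)) > 10
    relation = spatial_relation(cur_box, other_box)
    if moved and relation == "overlapping":
        return "approach_and_touch"
    if moved:
        return "move"
    return "stay"

# Toy ontology: (subject class, generic action, object class) -> inherited action.
ONTOLOGY = {("person", "approach_and_touch", "cup"): "pick_up"}

prev, cur, cup = (0, 0, 40, 80), (60, 0, 100, 80), (90, 30, 120, 60)
g = generic_action(prev, cur, cup)
print(g, ONTOLOGY.get(("person", g, "cup"), g))
```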