• Title/Abstract/Keywords: Video recognition

Search results: 681 items (processing time: 0.025 sec)

Dense RGB-D Map-Based Human Tracking and Activity Recognition using Skin Joints Features and Self-Organizing Map

  • Farooq, Adnan;Jalal, Ahmad;Kamal, Shaharyar
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 9, No. 5 / pp.1856-1869 / 2015
  • This paper addresses 3D human activity detection, tracking, and recognition from RGB-D video sequences using a structured feature framework. Dense depth images are first captured with a depth camera. To track human silhouettes, we exploit spatial/temporal continuity and constraints on human motion, and compute the centroid of each activity via a chain-coding mechanism and centroid-point extraction. For body skin joint features, human skin color is estimated to identify body parts (i.e., head, hands, and feet) from which joint points are extracted. These joint points are then processed into features, including distance position features and centroid distance features. Lastly, self-organizing maps are used to recognize the different activities. Experimental results demonstrate that the proposed method is reliable and efficient at recognizing human poses in a variety of realistic scenes. The proposed system should be applicable to consumer applications such as healthcare, video surveillance, and indoor monitoring systems that track and recognize the activities of multiple users.
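The final recognition step above relies on a self-organizing map. A minimal 1-D SOM can be sketched in plain Python as below; this is an illustration only, not the authors' implementation, and the unit count, epochs, and learning schedule are arbitrary assumptions:

```python
import math, random

def train_som(samples, n_units=4, epochs=200, lr0=0.5, seed=0):
    """Train a tiny 1-D self-organizing map on feature vectors in [0, 1]^d."""
    random.seed(seed)
    dim = len(samples[0])
    units = [[random.random() for _ in range(dim)] for _ in range(n_units)]
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)                    # decaying learning rate
        radius = max(1.0, n_units / 2 * (1 - t / epochs))
        x = random.choice(samples)
        # best-matching unit: nearest unit by squared Euclidean distance
        bmu = min(range(n_units),
                  key=lambda i: sum((u - v) ** 2 for u, v in zip(units[i], x)))
        for i in range(n_units):                       # pull neighbours toward the sample
            h = math.exp(-((i - bmu) ** 2) / (2 * radius ** 2))
            units[i] = [u + lr * h * (v - u) for u, v in zip(units[i], x)]
    return units

def best_unit(units, x):
    """Index of the unit closest to feature vector x."""
    return min(range(len(units)),
               key=lambda i: sum((u - v) ** 2 for u, v in zip(units[i], x)))
```

After training on labelled activity features, each map unit can be tagged with the activity label of the samples it wins most often, which turns the map into a simple activity classifier.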

Human Action Recognition Based on 3D Human Modeling and Cyclic HMMs

  • Ke, Shian-Ru;Thuc, Hoang Le Uyen;Hwang, Jenq-Neng;Yoo, Jang-Hee;Choi, Kyoung-Ho
    • ETRI Journal / Vol. 36, No. 4 / pp.662-672 / 2014
  • Human action recognition is used in areas such as surveillance, entertainment, and healthcare. This paper proposes a system to recognize both single and continuous human actions from monocular video sequences, based on 3D human modeling and cyclic hidden Markov models (CHMMs). First, for each frame in a monocular video sequence, the 3D coordinates of joints belonging to a human object, through actions of multiple cycles, are extracted using 3D human modeling techniques. The 3D coordinates are then converted into a set of geometrical relational features (GRFs) for dimensionality reduction and discrimination increase. For further dimensionality reduction, k-means clustering is applied to the GRFs to generate clustered feature vectors. These vectors are used to train CHMMs separately for different types of actions, based on the Baum-Welch re-estimation algorithm. For recognition of continuous actions that are concatenated from several distinct types of actions, a designed graphical model is used to systematically concatenate different separately trained CHMMs. The experimental results show the effective performance of our proposed system in both single and continuous action recognition problems.
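The intermediate clustering stage described above (k-means over the GRFs to build clustered feature vectors for the CHMMs) can be sketched with plain Lloyd's iterations. This is a generic illustration, not the authors' code:

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Lloyd's algorithm: cluster feature vectors into k codewords."""
    random.seed(seed)
    centers = random.sample(points, k)
    for _ in range(iters):
        buckets = [[] for _ in range(k)]
        for p in points:                               # assignment step
            j = min(range(k),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(centers[i], p)))
            buckets[j].append(p)
        for i, b in enumerate(buckets):                # update step
            if b:
                centers[i] = [sum(c) / len(b) for c in zip(*b)]
    return centers

def quantize(centers, p):
    """Map a feature vector to the index of its nearest codeword."""
    return min(range(len(centers)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(centers[i], p)))
```

The `quantize` index stream is what a discrete HMM would then consume as its observation symbols during Baum-Welch training.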

모멘트 변화와 객체 크기 비율을 이용한 객체 행동 및 위험상황 인식 (Object-Action and Risk-Situation Recognition Using Moment Change and Object Size's Ratio)

  • 곽내정;송특섭
    • 한국멀티미디어학회논문지 / Vol. 17, No. 5 / pp.556-565 / 2014
  • This paper proposes a method to track an object in real-time video from a single web camera and to recognize human actions and risk situations. The method recognizes basic actions performed in daily life and detects risk situations such as fainting and falling, distinguishing them from ordinary actions. It models the background, computes the difference image between the input image and the modeled background, extracts the human object, tracks the object's motion, and recognizes its actions. Object tracking uses the moment information of the extracted object, and the recognition features are the change in moments and the ratio of the object's size between frames. Four everyday actions are classified (walking, walking diagonally, sitting down, and standing up), while a sudden fall is classified as a risk situation. The method was tested on web-camera video of eight participants, classifying actions and recognizing risk situations. It achieved a recognition rate above 97% for each action and 100% for risk situations.
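The pipeline above (background difference, object extraction, then moment and size features) can be illustrated on a toy grayscale frame. This is a minimal sketch; the threshold value and the exact feature set are assumptions, not the paper's values:

```python
def extract_object(frame, background, thresh=30):
    """Threshold the |frame - background| difference into a binary mask,
    then return the mask's centroid (first moments) and its
    bounding-box height/width ratio."""
    h, w = len(frame), len(frame[0])
    pts = [(x, y) for y in range(h) for x in range(w)
           if abs(frame[y][x] - background[y][x]) > thresh]
    if not pts:
        return None
    cx = sum(x for x, _ in pts) / len(pts)   # centroid from image moments
    cy = sum(y for _, y in pts) / len(pts)
    xs = [x for x, _ in pts]
    ys = [y for _, y in pts]
    ratio = (max(ys) - min(ys) + 1) / (max(xs) - min(xs) + 1)
    return (cx, cy), ratio
```

Tracking the centroid and the ratio across frames gives the two cues the paper uses: a tall-to-flat ratio change suggests sitting or falling, and a sudden large change flags a risk situation.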

HSI 색상 모델에서 색상 분할을 이용한 교통 신호등 검출과 인식 (Traffic Signal Detection and Recognition Using a Color Segmentation in a HSI Color Model)

  • 정민철
    • 반도체디스플레이기술학회지 / Vol. 21, No. 4 / pp.92-98 / 2022
  • This paper proposes a new method for traffic signal detection and recognition in the HSI color model. The method first converts an ROI image from the RGB model to the HSI model to segment the color of a traffic signal. The segmented colors are then dilated by morphological processing to connect the signal light to its housing, and finally the signal light and housing are extracted by aspect ratio using connected-component analysis. The extracted components yield the detection and recognition of the traffic signal lights. The method is implemented in C on a Raspberry Pi 4 with a camera module for real-time image processing. The system was mounted in a moving vehicle and recorded video like a dashboard camera; each frame of the recorded video was then extracted to test the method. The results show that the proposed method successfully detects and recognizes traffic signals.
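The RGB-to-HSI conversion underpinning the color segmentation follows the standard textbook formulas; a small sketch is below (the hue/saturation thresholds a real detector would apply on top of this are application-specific and not taken from the paper):

```python
import math

def rgb_to_hsi(r, g, b):
    """Convert RGB components in [0, 1] to (hue in degrees, saturation,
    intensity) using the classic geometric HSI formulas."""
    i = (r + g + b) / 3.0
    s = 0.0 if i == 0 else 1.0 - min(r, g, b) / i
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    # clamp for numerical safety before acos
    h = 0.0 if den == 0 else math.degrees(math.acos(max(-1.0, min(1.0, num / den))))
    if b > g:                      # hue lives on a circle; reflect for B > G
        h = 360.0 - h
    return h, s, i
```

A red traffic light, for instance, would then be segmented by keeping pixels whose hue is near 0°/360° with sufficiently high saturation, which is far more robust to illumination changes than thresholding RGB directly.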

Egocentric Vision for Human Activity Recognition Using Deep Learning

  • Malika Douache;Badra Nawal Benmoussat
    • Journal of Information Processing Systems / Vol. 19, No. 6 / pp.730-744 / 2023
  • This paper addresses the recognition of human activities using egocentric vision, in particular video captured by body-worn cameras, which is useful for video surveillance, automatic search, and video indexing. It can also assist elderly and frail persons, for example when the system is realized on an external device such as a robot acting as a personal assistant: the inferred information is used both online, to assist the person, and offline, to support the assistant. Activity recognition remains difficult because of the large variations in how actions are executed, and the proposed method is designed to be robust against these factors of variability. The main goal of this paper is an efficient yet simple recognition method using egocentric camera data only, based on a convolutional neural network and deep learning. In terms of accuracy, simulation results outperform the current state of the art by a significant margin: 61% when using egocentric camera data only, more than 44% when combining egocentric and several stationary cameras, and more than 12% when using both inertial measurement unit (IMU) and egocentric camera data.
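At the core of any CNN like the one used here is 2-D convolution over image patches followed by a nonlinearity. A dependency-free sketch of that single building block (illustrative only; the abstract does not specify the paper's actual network architecture):

```python
def conv2d(image, kernel):
    """Valid-mode 2-D convolution (cross-correlation, as CNN layers
    actually compute it) of a 2-D image with a small kernel."""
    kh, kw = len(kernel), len(kernel[0])
    oh, ow = len(image) - kh + 1, len(image[0]) - kw + 1
    return [[sum(image[y + j][x + i] * kernel[j][i]
                 for j in range(kh) for i in range(kw))
             for x in range(ow)]
            for y in range(oh)]

def relu(fmap):
    """Elementwise rectified linear unit applied to a feature map."""
    return [[max(0, v) for v in row] for row in fmap]
```

Stacking many such convolution + ReLU layers, with learned kernels and pooling in between, is what lets the network build up from edges to action-discriminative features in the egocentric frames.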

증강현실을 이용한 한글의 색상 인식과 자소 패턴 분리 (Color Recognition and Phoneme Pattern Segmentation of Hangeul Using Augmented Reality)

  • 신성윤;최병석;이양원
    • 한국컴퓨터정보학회논문지 / Vol. 15, No. 6 / pp.29-35 / 2010
  • With the spread of inexpensive equipment and the diversifying use of video, augmented reality can overlay additional images and video on real-world footage. Many augmented-reality techniques have appeared recently, but accurate character recognition has not yet been achieved among them. In this paper, a marker displayed visually as a letter is recognized and the color matching that letter's color is identified. The recognized letter is then rendered on screen; we apply a phoneme-pattern segmentation algorithm based on horizontal projection and present a method for separating phonemes (jamo) according to the six composition formats of Hangul. Step-by-step experimental examples of phoneme-pattern segmentation using augmented reality are shown, and the experiments yielded a detection rate above 90%.
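The horizontal-projection step described above can be sketched as follows: count foreground pixels per row of the binarized character image, then split the glyph at empty-row valleys. This is a simplified illustration; real Hangul layout analysis also uses vertical projections and the six composition formats:

```python
def horizontal_projection(binary):
    """Row-wise count of foreground (1) pixels in a binary image;
    valleys in this profile separate vertically stacked phonemes."""
    return [sum(row) for row in binary]

def split_at_valleys(profile):
    """Return (start, end) row bands where the projection is non-zero."""
    bands, start = [], None
    for y, v in enumerate(profile):
        if v and start is None:
            start = y
        elif not v and start is not None:
            bands.append((start, y - 1))
            start = None
    if start is not None:
        bands.append((start, len(profile) - 1))
    return bands
```

For a syllable block such as 한, the profile would typically show separate bands for the initial/vowel region and the final consonant below it, which the second function returns as row ranges.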

3D영상 객체인식을 통한 얼굴검출 파라미터 측정기술에 대한 연구 (Object Recognition Face Detection With 3D Imaging Parameters A Research on Measurement Technology)

  • 최병관;문남미
    • 한국컴퓨터정보학회논문지 / Vol. 16, No. 10 / pp.53-62 / 2011
  • With advances in converged IT technologies, video object recognition, once regarded as a specialized field, is moving onto personal mobile devices alongside the development of smartphones. 3D face detection is evolving into intelligent video detection through object recognition, and its development is accelerating together with face detection based on image recognition. This paper proposes a face measurement scheme that applies human-recognition-based face object detection to an IP camera, enabling identification of authorized persons entering and leaving. The approach: 1) develops and applies face-model-based face tracking; 2) establishes baseline parameter values through PC-based human-recognition measurements using the developed algorithm, with face tracking remaining feasible under CPU load; and 3) demonstrates real-time tracking of the interocular distance and gaze angle.

영상 신호에서 패턴인식을 이용한 다중 포인트 변위측정 (Displacement Measurement of Multi-Point Using a Pattern Recognition from Video Signal)

  • 전형섭;최영철;박종원
    • 한국소음진동공학회:학술대회논문집 / Proceedings of the 2008 Fall Conference / pp.675-680 / 2008
  • This paper proposes a way to measure the displacement of multiple points using pattern recognition on a video signal. Displacement is generally measured with a gap sensor, a type of displacement sensor. However, a conventional sensor is difficult to use in places where attaching one is impractical, such as high-temperature or radioactive areas. In such places, non-contact methods must be used, and in this study images from a CCD camera were employed. Measuring displacement from camera images makes a non-contact method possible. The device is simple to install and can measure displacement at multiple points simultaneously, which helps overcome spatial constraints.
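The pattern-recognition step (locating a marked target in each CCD frame, so that displacement is the change in matched position between frames) can be sketched as brute-force template matching. This is an illustration only; a practical system would use normalized correlation and sub-pixel interpolation:

```python
def locate(template, frame):
    """Find the (x, y) offset in `frame` that minimizes the sum of
    squared differences (SSD) with `template` -- brute-force matching."""
    th, tw = len(template), len(template[0])
    fh, fw = len(frame), len(frame[0])
    best, best_xy = None, (0, 0)
    for y in range(fh - th + 1):
        for x in range(fw - tw + 1):
            ssd = sum((frame[y + j][x + i] - template[j][i]) ** 2
                      for j in range(th) for i in range(tw))
            if best is None or ssd < best:
                best, best_xy = ssd, (x, y)
    return best_xy
```

Running `locate` on two successive frames and subtracting the matched positions yields the per-point displacement in pixels; one template per marked point gives the multi-point measurement.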


Recognition and tracking system of moving objects based on artificial neural network and PWM control

  • Sugisaka, M.
    • 제어로봇시스템학회:학술대회논문집 / Proceedings of the 1992 Korean Automatic Control Conference (International Program); KOEX, Seoul; 19-21 Oct. 1992 / pp.573-574 / 1992
  • We developed a recognition and tracking system for moving objects. The system consists of one CCD video camera, two DC motors on the horizontal and vertical axes with encoders, a pulse width modulation (PWM) driving unit, a 16-bit NEC 9801 microcomputer, and their interfaces. The system is able to recognize the shape and size of a moving object and to track the object within a certain range of error. This paper presents a brief introduction to the recognition and tracking system developed in our laboratory.
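The PWM driving stage can be sketched as a proportional controller that maps the tracking error (the object's offset from the image center, in pixels) to a signed duty cycle whose sign selects the motor direction. The gain and limit below are assumptions for illustration, not values from the paper:

```python
def pwm_duty(error_px, k_p=0.004, limit=1.0):
    """Proportional controller: map a pixel tracking error to a signed
    PWM duty cycle clipped to [-limit, limit]."""
    duty = k_p * error_px
    return max(-limit, min(limit, duty))
```

One such controller per axis (horizontal and vertical) keeps the camera centered on the object: a large error saturates at full duty, and small errors produce proportionally gentler corrections.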


심정지 인지를 위한 동영상 교육과 강의식 교육의 비교 연구 : 청소년을 대상으로 (Comparison of Video Lecture and Instructor-Led Lecture for the Recognition of Cardiac Arrest : Korean Youths)

  • 정은경;이효철
    • 한국산학기술학회논문지 / Vol. 19, No. 9 / pp.139-145 / 2018
  • Prompt recognition of cardiac arrest by laypersons is a critical first step for survival. This study compared conventional instructor-led lectures with video-based education on agonal breathing, which occurs during cardiac arrest, analyzing cardiac arrest recognition to identify the more effective teaching method. The design was a randomized controlled study in which participants were randomly assigned to lecture or video education and cardiac arrest recognition was then compared. The study was conducted from October 30 to 31, 2015, with 104 youths aged 15 or older: 52 in the experimental group and 52 in the control group. For a video showing an unresponsive victim with no breathing, the two groups did not differ (p=0.741). For a video showing an unresponsive victim with agonal breathing, however, 43 participants (82.7%) in the experimental group performed CPR versus 33 (63.5%) in the control group, a significant difference (p=0.006). These results indicate that teaching agonal breathing with video during CPR training improves cardiac arrest recognition compared with conventional lectures.