• Title/Abstract/Keyword: Computer vision technology


Human activity recognition with analysis of angles between skeletal joints using a RGB-depth sensor

  • Ince, Omer Faruk;Ince, Ibrahim Furkan;Yildirim, Mustafa Eren;Park, Jang Sik;Song, Jong Kwan;Yoon, Byung Woo
    • ETRI Journal / Vol. 42, No. 1 / pp.78-89 / 2020
  • Human activity recognition (HAR) has become an effective computer vision tool for video surveillance systems. In this paper, a novel biometric system that can detect human activities in 3D space is proposed. To implement HAR, joint angles obtained using an RGB-depth sensor are used as features. Because HAR operates in the time domain, angle information is stored using a sliding-kernel method. The Haar wavelet transform (HWT) is applied to preserve the information in the features before reducing the data dimension. Dimension reduction using an averaging algorithm is also applied to decrease the computational cost, which yields faster performance while maintaining high accuracy. Before classification, a proposed thresholding method with the inverse HWT is used to extract the final feature set. Finally, the k-nearest neighbor (k-NN) algorithm is used to recognize the activity from the given data. The method compares favorably with the results of other machine learning algorithms.
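As an illustration of the kind of pipeline this abstract describes, the sketch below pairs a one-level Haar wavelet transform with a plain k-NN vote. It is a minimal numpy-only sketch: the sliding-kernel buffering, averaging reduction, thresholding, and inverse HWT steps of the paper are omitted, and all names are illustrative.

```python
import numpy as np

def haar_transform(x):
    """One level of the Haar wavelet transform: pairwise averages
    (approximation coefficients) followed by pairwise differences
    (detail coefficients)."""
    x = np.asarray(x, dtype=float)
    avg = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    det = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return np.concatenate([avg, det])

def knn_classify(train_feats, train_labels, query, k=3):
    """Majority vote among the k nearest training feature vectors
    (Euclidean distance)."""
    dists = np.linalg.norm(np.asarray(train_feats, float) - np.asarray(query, float), axis=1)
    nearest = np.argsort(dists)[:k]
    votes = [train_labels[i] for i in nearest]
    return max(set(votes), key=votes.count)
```

A feature vector of joint angles would be passed through `haar_transform` before being handed to `knn_classify` along with the stored training set.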

손가락 고정구를 이용한 휴대용 전자제품의 증강현실기반 감각형 상호작용 (AR-based Tangible Interaction Using a Finger Fixture for Digital Handheld Products)

  • 박형준;문희철
    • 한국CDE학회논문집 / Vol. 16, No. 1 / pp.1-10 / 2011
  • In this paper, we propose AR-based tangible interaction using a finger fixture for the virtual evaluation of digital handheld products. To realize tangible interaction between a user and a product in a computer-vision-based AR environment, we use two types of tangible objects: a product-type object and a finger fixture. The product-type object is used to acquire the position and orientation of the product, and the finger fixture is used to recognize the position of a fingertip. The two objects are fabricated by RP technology, and AR markers are attached to them. The finger fixture is designed to satisfy various requirements, with the ultimate goal that a user wearing the finger fixture on his or her index finger can create HMI events by touching specified regions (buttons or sliders) of the product-type object with the fingertip. By assessing the accuracy of the proposed interaction, we have found that it can be applied to a wide variety of digital handheld products whose button size is at least 6 mm. After performing design evaluations of several handheld products using the proposed AR-based tangible interaction, we received highly encouraging feedback from users, since the interaction is intuitive and tangible enough to feel like manipulating products with one's own hands.
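The core geometric step such a system needs is expressing the tracked fingertip in the product's marker frame and hit-testing it against a button region. The sketch below assumes 4×4 homogeneous marker poses from some tracker; the function names, frames, and the square-button hit test are illustrative, not the paper's implementation, though the 6 mm minimum button size comes from its accuracy assessment.

```python
import numpy as np

def point_in_product_frame(T_cam_product, T_cam_finger, p_finger):
    """Express a fingertip point, given in the finger-fixture marker
    frame, in the product-type object's marker frame. T_* are 4x4
    homogeneous camera-to-marker poses from a marker tracker."""
    p_h = np.append(np.asarray(p_finger, dtype=float), 1.0)
    return (np.linalg.inv(T_cam_product) @ (T_cam_finger @ p_h))[:3]

def button_touched(tip, button_center, button_size_mm=6.0):
    """Hit test against a square button of the paper's minimum 6 mm
    size; both points are in the product frame, in millimetres."""
    d = np.abs(np.asarray(tip[:2], float) - np.asarray(button_center[:2], float))
    return bool(np.all(d <= button_size_mm / 2.0))
```

When `button_touched` fires, the system would emit the corresponding HMI event (button press or slider move) to the virtual product model.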

Emotion Recognition Method for Driver Services

  • Kim, Ho-Duck;Sim, Kwee-Bo
    • International Journal of Fuzzy Logic and Intelligent Systems / Vol. 7, No. 4 / pp.256-261 / 2007
  • Electroencephalography (EEG) has been used in psychology for many years to record the activity of the human brain. As technology has developed, the neural basis of the functional areas involved in emotion processing has gradually been revealed, so we use EEG to measure the fundamental areas of the human brain that control emotion. Hand gestures such as shaking and head gestures such as nodding are often used as body language for communication, and their recognition is important because they are a useful communication medium between humans and computers; gesture recognition is commonly studied with computer vision methods. In existing research, most emotion recognition methods use either EEG signals or gestures alone. In this paper, we use EEG signals and gestures together for human emotion recognition, and we select driver emotion as the specific target. The experimental results show that using both EEG signals and gestures achieves higher recognition rates than using EEG signals or gestures alone. For both EEG signals and gestures, features are selected with Interactive Feature Selection (IFS), a method based on reinforcement learning.
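The paper's Interactive Feature Selection is reinforcement-learning based; as a rough, much simpler stand-in for the idea of iteratively building a feature subset, the sketch below shows plain greedy forward selection driven by an arbitrary scoring function. All names are illustrative and this is not the IFS algorithm itself.

```python
def forward_select(features, score_fn, max_feats=3):
    """Greedily add, one at a time, the feature that most improves
    score_fn(subset); stop when nothing improves or max_feats is hit."""
    selected, best_score = [], float("-inf")
    while len(selected) < max_feats:
        candidates = [f for f in features if f not in selected]
        if not candidates:
            break
        scored = [(score_fn(selected + [f]), f) for f in candidates]
        top_score, top_feat = max(scored)
        if top_score <= best_score:
            break  # no candidate improves the current subset
        selected.append(top_feat)
        best_score = top_score
    return selected
```

In practice `score_fn` would be cross-validated recognition accuracy over the combined EEG-and-gesture feature pool.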

선박 설계도면 정보 제공을 위한 증강현실 기반의 3D 모델 상호작용 사용자 인터페이스 개발 (Development of Augmented Reality Based 3D Model Interaction User-Interface for Supporting Ship Design Drawing Information)

  • 오연재;김응곤
    • 한국전자통신학회논문지 / Vol. 8, No. 12 / pp.1933-1940 / 2013
  • Recently, with improvements in computer performance and advances in information devices, mobile augmented reality has developed rapidly. However, most content is passive, and even interactive content offers only a limited degree of freedom, so there are limits to how much interest and immersion it can give users. In this paper, we design a mobile-AR-based 2D drawing interaction UI system that reinforces an existing AR system by linking a vision-based AR system with a database management system, improving the interaction UI to enable two-way communication between the real world and the virtual world and thereby increasing usability.

수면 중 돌연사 감지를 위한 비디오 모션 분석 방법 (Video Motion Analysis for Sudden Death Detection During Sleeping)

  • 이승호
    • 한국산학기술학회논문지 / Vol. 19, No. 10 / pp.603-609 / 2018
  • Sudden death during sleep, from causes such as acute myocardial infarction, occurs not only among the elderly but also among infants and relatively young people in their 20s to 40s. Because sudden death during sleep is difficult to predict, sleep monitoring is needed to prevent it. This paper proposes a new video analysis method that can detect sudden death during sleep without attaching any sensors to the body. The proposed method applies motion magnification to detect the subtle movements caused by breathing. If the inter-frame difference remains near zero even after motion magnification, no motion is present, and sudden death is judged to be possible. Applying motion magnification to two videos of sleeping babies, the subtle breathing motion was detected accurately, and this was judged useful for distinguishing normal sleep from sudden death. Because the proposed method requires no body-attached sensors, it can be used conveniently in homes with babies or in single-person households.
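The decision rule in the abstract, "flag a possible emergency when the inter-frame difference stays near zero even after motion magnification", can be sketched as below. The motion-magnification step itself is assumed to have already been applied to the frames; the threshold value and function names are illustrative.

```python
import numpy as np

def motion_energy(prev_frame, curr_frame):
    """Mean absolute inter-frame difference of two (already
    motion-magnified) grayscale frames."""
    return float(np.mean(np.abs(curr_frame.astype(float) - prev_frame.astype(float))))

def possibly_no_breathing(frames, threshold=1.0):
    """Raise a flag when every consecutive pair of magnified frames
    differs by less than `threshold` (the value is illustrative and
    would be tuned on real footage)."""
    return all(motion_energy(a, b) < threshold for a, b in zip(frames, frames[1:]))
```

A real monitor would evaluate this over a sliding window of recent frames and alert only after the flag persists for some seconds.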

딥러닝 기반 표고버섯 병해충 이미지 분석에 관한 연구 (A Study of Shiitake Disease and Pest Image Analysis based on Deep Learning)

  • 조경호;정세훈;심춘보
    • 한국멀티미디어학회논문지 / Vol. 23, No. 1 / pp.50-57 / 2020
  • Detecting and eliminating diseases and pests is important in agriculture because it is directly related to crop production, so early detection and treatment are essential. Image classification based on traditional computer vision has not been widely applied to disease and pest detection because of its low accuracy in feature extraction and classification. In this paper, we propose a model that identifies shiitake diseases and pests using a deep CNN, which offers higher image recognition performance than previous approaches. For performance evaluation, we compare the proposed deep learning model with AlexNet on a test dataset and an extended test dataset. The results confirm that the proposed model outperforms AlexNet, achieving approximately 72% versus 48% accuracy on the test data and approximately 81% versus 62% on the extended test data.
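The basic operation that deep CNNs such as AlexNet and the proposed model stack into layers is a 2-D convolution followed by a nonlinearity. As a minimal numpy-only illustration (not the paper's network), a naive "valid" convolution and ReLU can be written as:

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Naive 'valid'-mode 2-D cross-correlation (what deep-learning
    libraries usually call convolution): slide the kernel over the
    image and sum the elementwise products at each position."""
    H, W = img.shape
    kh, kw = kernel.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Rectified linear activation applied between convolutional layers."""
    return np.maximum(x, 0)
```

A full classifier alternates many such convolution/activation layers (with learned kernels) and pooling, ending in fully connected layers that output per-class scores.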

Object Recognition of Robot Using 3D RFID System

  • Roh, Se-Gon;Park, Jin-Ho;Lee, Young-Hoon;Choi, Hyouk-Ryeol
    • 제어로봇시스템학회 학술대회논문집 / ICCAS 2005 / pp.62-67 / 2005
  • Object recognition in robotics has generally depended on computer vision systems. Recently, RFID (Radio Frequency IDentification) technology has been suggested to support recognition and has been applied rapidly and widely. This paper introduces a more advanced RFID-based recognition. A novel tag named the 3D tag, which facilitates understanding of the object, was designed. Previous RFID-based systems only detect the existence of an object; the system must therefore find the object and carry out a complex process such as pattern matching to identify it. The 3D tag, however, not only detects the existence of the object, as other tags do, but also estimates the orientation and position of the object. These characteristics of the 3D tag allow the robot to considerably reduce its dependence on the other sensors required for object recognition. In this paper, we analyze the detection characteristics of the 3D tag and the position and orientation estimation algorithm of the 3D-tag-based RFID system.


Design and Implementation of Depth Image Based Real-Time Human Detection

  • Lee, SangJun;Nguyen, Duc Dung;Jeon, Jae Wook
    • JSTS: Journal of Semiconductor Technology and Science / Vol. 14, No. 2 / pp.212-226 / 2014
  • This paper presents the design and implementation of a pipelined architecture and a method for real-time human detection using depth images from a Time-of-Flight (ToF) camera. In the proposed method, we use the Euclidean Distance Transform (EDT) to extract the human body location, and we then use 1D and 2D scanning windows to extract human joint locations. The EDT-based human extraction method is robust against noise, and the 1D and 2D scanning windows make it easy to extract human joint locations from a distance image. The proposed method is designed in Verilog HDL (Hardware Description Language) as a dedicated hardware architecture based on a pipeline, implemented on a Xilinx Virtex-6 LX750 Field Programmable Gate Array (FPGA). The FPGA implementation runs at a maximum operating frequency of 80 MHz and achieves over 60 fps on QVGA (320×240) depth images.
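The EDT at the heart of this method assigns each foreground pixel its distance to the nearest background pixel; its peak is then a natural seed for the body location. The brute-force software reference below illustrates the transform only (the paper implements a pipelined hardware equivalent; the function name is illustrative).

```python
import numpy as np

def edt_brute_force(mask):
    """Euclidean Distance Transform of a boolean foreground mask:
    each True pixel gets its distance to the nearest False pixel.
    O(n^2) reference version, fine only for small masks."""
    mask = np.asarray(mask, dtype=bool)
    bg = np.argwhere(~mask)           # background pixel coordinates
    out = np.zeros(mask.shape, dtype=float)
    for y, x in np.argwhere(mask):    # each foreground pixel
        out[y, x] = np.min(np.hypot(bg[:, 0] - y, bg[:, 1] - x))
    return out
```

The pixel with the maximum transform value lies deepest inside the silhouette, which is why an EDT-based body seed is robust to boundary noise in the depth segmentation.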

제스처와 EEG 신호를 이용한 감정인식 방법 (Emotion Recognition Method using Gestures and EEG Signals)

  • 김호덕;정태민;양현창;심귀보
    • 제어로봇시스템학회논문지 / Vol. 13, No. 9 / pp.832-837 / 2007
  • Electroencephalography (EEG) has been used in psychology for many years to record the activity of the human brain. As technology has developed, the neural basis of the functional areas involved in emotion processing has gradually been revealed, so we use EEG to measure the fundamental areas of the human brain that control emotion. Hand gestures such as shaking and head gestures such as nodding are often used as body language for communication, and their recognition is important because they are a useful communication medium between humans and computers; gesture recognition is commonly studied with computer vision methods. In existing research, most emotion recognition methods use either EEG signals or gestures alone. In this paper, we use EEG signals and gestures together for human emotion recognition, and we select driver emotion as the specific target. The experimental results show that using both EEG signals and gestures achieves higher recognition rates than using EEG signals or gestures alone. For both EEG signals and gestures, features are selected with Interactive Feature Selection (IFS), a method based on reinforcement learning.

Human Action Recognition Based on Local Action Attributes

  • Zhang, Jing;Lin, Hong;Nie, Weizhi;Chaisorn, Lekha;Wong, Yongkang;Kankanhalli, Mohan S
    • Journal of Electrical Engineering and Technology / Vol. 10, No. 3 / pp.1264-1274 / 2015
  • Human action recognition has received much interest in the computer vision community. Most existing methods focus either on constructing robust descriptors from the temporal domain or on computational methods that exploit the discriminative power of a descriptor. In this paper, we explore the idea of using local action attributes to form an action descriptor, where an action is no longer characterized by motion changes in the temporal domain but by a local semantic description of the action. We propose a novel framework that introduces local action attributes to represent an action for the final human action categorization. The local action attributes are defined for each body part and are independent of the global action. The resulting attribute descriptor is used to jointly model human actions and achieve robust performance. In addition, we study the impact of using local and global low-level body features for the aforementioned attributes. Experiments on the KTH dataset and the MV-TJU dataset show that our local-action-attribute-based descriptor improves action recognition performance.