• Title/Summary/Keyword: Computer vision technology

Human activity recognition with analysis of angles between skeletal joints using an RGB-depth sensor

  • Ince, Omer Faruk; Ince, Ibrahim Furkan; Yildirim, Mustafa Eren; Park, Jang Sik; Song, Jong Kwan; Yoon, Byung Woo
    • ETRI Journal / v.42 no.1 / pp.78-89 / 2020
  • Human activity recognition (HAR) has become an effective computer vision tool for video surveillance systems. In this paper, a novel biometric system that can detect human activities in 3D space is proposed. To implement HAR, joint angles obtained with an RGB-depth sensor are used as features. Because HAR operates in the time domain, angle information is stored using a sliding-kernel method. The Haar wavelet transform (HWT) is applied to preserve the information in the features before reducing the data dimension. Dimension reduction using an averaging algorithm is also applied to decrease the computational cost, which yields faster performance while maintaining high accuracy. Before classification, a proposed thresholding method with the inverse HWT is applied to extract the final feature set. Finally, the k-nearest neighbor (k-NN) algorithm is used to recognize the activity from the given data. The proposed method compares favorably with other machine learning algorithms.
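
A minimal sketch of this pipeline, assuming synthetic joint-angle windows and illustrative choices of window length, wavelet level, and threshold (PyWavelets and scikit-learn stand in for the authors' implementation):

```python
import numpy as np
import pywt
from sklearn.neighbors import KNeighborsClassifier

def extract_features(angle_window, threshold=0.1):
    """angle_window: (frames, n_angles) buffer filled by the sliding kernel."""
    feats = []
    for series in angle_window.T:                       # one time series per joint angle
        coeffs = pywt.wavedec(series, 'haar', level=2)  # Haar-wavelet transform (HWT)
        approx = coeffs[0]                              # averaging-style dimension reduction
        coeffs = [np.where(np.abs(c) > threshold, c, 0.0) for c in coeffs]  # thresholding
        recon = pywt.waverec(coeffs, 'haar')            # inverse HWT
        feats.append(np.concatenate([approx, recon[:len(approx)]]))
    return np.concatenate(feats)

rng = np.random.default_rng(0)
windows = rng.standard_normal((20, 64, 8))              # 20 clips, 64 frames, 8 joint angles
labels = rng.integers(0, 3, 20)                         # 3 hypothetical activity classes
X = np.stack([extract_features(w) for w in windows])
clf = KNeighborsClassifier(n_neighbors=3).fit(X, labels)  # final k-NN recognition step
print(clf.predict(X[:2]))
```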

AR-based Tangible Interaction Using a Finger Fixture for Digital Handheld Products (손가락 고정구를 이용한 휴대용 전자제품의 증강현실기반 감각형 상호작용)

  • Park, Hyung-Jun; Moon, Hee-Cheol
    • Korean Journal of Computational Design and Engineering / v.16 no.1 / pp.1-10 / 2011
  • In this paper, we propose AR-based tangible interaction using a finger fixture for the virtual evaluation of digital handheld products. To realize tangible interaction between a user and a product in a computer vision based AR environment, we use two types of tangible objects: a product-type object and a finger fixture. The product-type object is used to acquire the position and orientation of the product, and the finger fixture is used to recognize the position of a fingertip. Both objects are fabricated by RP technology, and AR markers are attached to them. The finger fixture is designed to satisfy various requirements, with the ultimate goal that a user wearing it on his or her index finger can generate HMI events by touching specified regions (buttons or sliders) of the product-type object with the fingertip. By assessing the accuracy of the proposed interaction, we found that it can be applied to a wide variety of digital handheld products whose buttons are no smaller than 6 mm. After design evaluations of several handheld products using the proposed AR-based tangible interaction, we received highly encouraging feedback from users, since the interaction is intuitive and tangible enough to feel like manipulating products with one's hands.
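
Once marker tracking supplies the two poses, the touch-detection step can be pictured as pure geometry. The button layout, sizes, and touch tolerance below are hypothetical:

```python
import numpy as np

def to_product_frame(T_product, p_finger_world):
    """Map the fingertip (from the finger-fixture marker) into product coordinates."""
    p = np.append(p_finger_world, 1.0)               # homogeneous point
    return (np.linalg.inv(T_product) @ p)[:3]

def pressed_button(p_local, buttons, touch_depth=2.0):
    """buttons: {name: (center_xy, size_mm)} on the product's front face (z = 0)."""
    for name, (center, size) in buttons.items():
        half = size / 2.0                            # paper reports sizes >= 6 mm as reliable
        if (abs(p_local[0] - center[0]) <= half and
                abs(p_local[1] - center[1]) <= half and
                abs(p_local[2]) <= touch_depth):
            return name                              # fire the HMI event for this region
    return None

T_product = np.eye(4)                                # product pose from its AR marker
buttons = {"power": (np.array([0.0, 0.0]), 6.0)}     # one hypothetical 6 mm button
fingertip = np.array([1.0, 2.0, 0.5])                # fingertip position in world (mm)
print(pressed_button(to_product_frame(T_product, fingertip), buttons))
```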

Emotion Recognition Method for Driver Services

  • Kim, Ho-Duck; Sim, Kwee-Bo
    • International Journal of Fuzzy Logic and Intelligent Systems / v.7 no.4 / pp.256-261 / 2007
  • Electroencephalography (EEG) has been used for many years in psychology to record the activity of the human brain. As technology has developed, the neural basis of the functional areas involved in emotion processing has gradually been revealed, so we use EEG to measure the fundamental areas of the human brain that control emotion. Hand gestures such as shaking and head gestures such as nodding are often used as body language for communication, and their recognition matters because gestures are a useful communication medium between humans and computers; gesture recognition is typically studied with computer vision methods. Existing research mostly uses either EEG signals or gestures alone for emotion recognition. In this paper, we use EEG signals and gestures together for human emotion recognition, and we select driver emotion as the specific target. The experimental results show that using both EEG signals and gestures achieves higher recognition rates than using EEG signals or gestures alone. For both EEG signals and gestures, features are selected with Interactive Feature Selection (IFS), a method based on reinforcement learning.
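
A toy illustration of feature-level fusion on synthetic data (the features, class structure, and classifier are stand-ins, not the authors' setup; the IFS step is sketched separately under the related entry further down):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
n = 120
emotion = rng.integers(0, 4, n)                      # hypothetical driver-emotion labels
eeg = rng.standard_normal((n, 16)) + 0.3 * emotion[:, None]      # synthetic EEG features
gesture = rng.standard_normal((n, 8)) + 0.3 * emotion[:, None]   # synthetic gesture features
fused = np.hstack([eeg, gesture])                    # feature-level fusion of both modalities

for name, X in [("EEG only", eeg), ("gesture only", gesture), ("fused", fused)]:
    acc = cross_val_score(KNeighborsClassifier(5), X, emotion, cv=5).mean()
    print(f"{name}: {acc:.2f}")
```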

Development of Augmented Reality Based 3D Model Interaction User-Interface for Supporting Ship Design Drawing Information (선박 설계도면 정보 제공을 위한 증강현실 기반의 3D 모델 상호작용 사용자 인터페이스 개발)

  • Oh, Youn-Jae; Kim, Eung-Kon
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.8 no.12 / pp.1933-1940 / 2013
  • Recently, due to improvements in computer performance and the development of information devices, mobile augmented reality technology is spreading rapidly. However, because most content is passive or limited, users feel little interest or immersion. This paper designs an interaction user-interface system for 2D drawings based on mobile augmented reality, making bidirectional communication between the real world and the virtual world possible by combining vision-based augmented reality with a database system.
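
One plausible reading of the marker-to-database flow, with an in-memory SQLite table and made-up marker IDs, model paths, and metadata standing in for the ship-design database:

```python
import sqlite3

conn = sqlite3.connect(":memory:")                   # stand-in for the design database
conn.execute("CREATE TABLE drawings (marker_id INTEGER PRIMARY KEY,"
             " model_path TEXT, info TEXT)")
conn.execute("INSERT INTO drawings VALUES (7, 'models/engine_room.obj',"
             " 'Engine room piping, deck 2')")

def on_marker_detected(marker_id):
    """Resolve a marker recognized on the 2D drawing to its 3D model and metadata."""
    row = conn.execute("SELECT model_path, info FROM drawings WHERE marker_id = ?",
                       (marker_id,)).fetchone()
    if row is None:
        return "unknown marker"
    model_path, info = row
    return f"overlay {model_path}: {info}"           # handed off to the AR renderer

print(on_marker_detected(7))
```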

Video Motion Analysis for Sudden Death Detection During Sleeping (수면 중 돌연사 감지를 위한 비디오 모션 분석 방법)

  • Lee, Seung Ho
    • Journal of the Korea Academia-Industrial Cooperation Society / v.19 no.10 / pp.603-609 / 2018
  • Sudden death during sleep occurs across different age groups. To prevent an unexpected sudden death, sleep monitoring is required. This paper presents a video analysis method that detects sudden death without using any attachable sensors. In the proposed method, a motion magnification technique detects even very subtle motion during sleep. If the magnification cannot detect motion, the proposed method decides on an abnormal status (possibly sudden death). Experimental results on two kinds of sleep video show that motion magnification based video analysis can be useful for discriminating sleep (with very subtle motion) from sudden death.
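
A simplified stand-in for the decision rule, using plain frame-difference energy instead of true Eulerian motion magnification; the thresholds, window length, and synthetic frames are assumptions:

```python
import numpy as np

def motion_energy(frames):
    """Mean absolute inter-frame difference (a crude proxy; real Eulerian motion
    magnification band-pass filters and amplifies subtle temporal variation)."""
    diffs = np.abs(np.diff(frames.astype(np.float32), axis=0))
    return diffs.mean(axis=(1, 2))

def sleep_status(frames, thresh=0.05, window=30):
    """Flag abnormal status when even subtle motion is absent for `window` frames."""
    still = motion_energy(frames) < thresh
    for i in range(len(still) - window + 1):
        if still[i:i + window].all():
            return "abnormal (possible sudden death)"
    return "normal (subtle motion present)"

rng = np.random.default_rng(2)
breathing = 128 + 0.2 * rng.standard_normal((120, 64, 64))  # synthetic subtle motion
static = np.full((120, 64, 64), 128.0)                      # no motion at all
print(sleep_status(breathing), "/", sleep_status(static))
```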

A Study of Shiitake Disease and Pest Image Analysis based on Deep Learning (딥러닝 기반 표고버섯 병해충 이미지 분석에 관한 연구)

  • Jo, KyeongHo; Jung, SeHoon; Sim, ChunBo
    • Journal of Korea Multimedia Society / v.23 no.1 / pp.50-57 / 2020
  • Detecting and eliminating diseases and pests is important in agriculture because it is directly related to crop production; early detection and treatment of pest insects are essential. Image classification based on traditional computer vision has seen little application to disease and pest detection because its accuracy in feature extraction and classification is low. In this paper, we propose a model that identifies diseases and pests of shiitake mushrooms based on a deep CNN, which offers higher image recognition performance than previous approaches. For the performance evaluation, we compare the proposed deep learning model with AlexNet on both a test dataset and an extended test dataset. The results confirm that the proposed model outperforms AlexNet, with accuracies of approximately 72% versus AlexNet's 48% on the test data, and approximately 81% versus 62% on the extended test data.
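
The paper's exact architecture is not given here, so the following is only a schematic deep-CNN classifier in PyTorch with a hypothetical input size and class count:

```python
import torch
import torch.nn as nn

class ShiitakeCNN(nn.Module):
    """Small convolutional classifier; layer sizes are illustrative, not the paper's."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, n_classes)   # disease/pest class scores

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = ShiitakeCNN()
logits = model(torch.randn(4, 3, 128, 128))          # batch of hypothetical RGB crops
print(logits.shape)                                  # torch.Size([4, 2])
```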

Object Recognition of Robot Using 3D RFID System

  • Roh, Se-Gon; Park, Jin-Ho; Lee, Young-Hoon; Choi, Hyouk-Ryeol
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2005.06a / pp.62-67 / 2005
  • Object recognition in robotics has generally depended on computer vision systems. Recently, RFID (Radio Frequency IDentification) technology has been suggested to support recognition and has been rapidly and widely adopted. This paper introduces a more advanced RFID-based recognition approach. A novel tag named the 3D tag, which facilitates understanding of the object, was designed. Previous RFID-based systems only detect the existence of an object; the system must therefore find the object and carry out complex processing, such as pattern matching, to identify it. The 3D tag, however, not only detects the existence of the object, as other tags do, but also estimates its orientation and position. These characteristics allow a robot to considerably reduce its dependence on the other sensors required for object recognition. In this paper, we analyze the 3D tag's detection characteristics and the position and orientation estimation algorithm of the 3D tag-based RFID system.
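
The 3D tag's actual estimation algorithm is not reproduced in the abstract; as a loose analogy for estimating position from RF measurements, here is a least-squares trilateration toy with made-up reader-antenna positions:

```python
import numpy as np

def trilaterate(anchors, ranges):
    """Least-squares 2D position from anchor positions and measured ranges,
    via the standard linearization of |x - a_i|^2 = r_i^2."""
    A = 2 * (anchors[1:] - anchors[0])
    b = (ranges[0] ** 2 - ranges[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2))
    return np.linalg.lstsq(A, b, rcond=None)[0]

anchors = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0]])  # reader antenna positions (m)
true_pos = np.array([1.0, 1.0])                           # object's actual location
ranges = np.linalg.norm(anchors - true_pos, axis=1)       # ideal range measurements
print(trilaterate(anchors, ranges))                       # ~ [1.0, 1.0]
```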

Design and Implementation of Depth Image Based Real-Time Human Detection

  • Lee, SangJun; Nguyen, Duc Dung; Jeon, Jae Wook
    • JSTS: Journal of Semiconductor Technology and Science / v.14 no.2 / pp.212-226 / 2014
  • This paper presents the design and implementation of a pipelined architecture and method for real-time human detection using depth images from a Time-of-Flight (ToF) camera. In the proposed method, we use the Euclidean Distance Transform (EDT) to extract the human body location, and then use 1D and 2D scanning windows to extract human joint locations. The EDT-based human extraction method is robust against noise, and the 1D and 2D scanning windows make it easy to extract joint locations from a distance image. The proposed method is designed in Verilog HDL (Hardware Description Language) as dedicated hardware based on a pipelined architecture, implemented on a Xilinx Virtex-6 LX750 Field Programmable Gate Array (FPGA). The FPGA implementation runs at a maximum operating frequency of 80 MHz and processes QVGA (320×240) depth images at over 60 fps.
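
A software sketch of the EDT-based body-extraction idea (the paper implements this step as a Verilog pipeline on the FPGA); the depth frame and thresholds here are synthetic:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def body_center(depth, near=500, far=2500):
    """Find the torso as the deepest interior point of the foreground mask.
    The EDT peak lies far from silhouette edges, so isolated noise pixels
    (which have small EDT values) cannot win — hence the robustness to noise."""
    mask = (depth > near) & (depth < far)            # foreground by depth range (mm)
    edt = distance_transform_edt(mask)
    cy, cx = np.unravel_index(np.argmax(edt), edt.shape)
    return (cx, cy), edt[cy, cx]                     # joint search would scan from here

depth = np.full((240, 320), 3000, dtype=np.uint16)   # QVGA background
depth[60:200, 120:200] = 1500                        # synthetic person blob
print(body_center(depth))                            # center near (160, 130)
```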

Emotion Recognition Method using Gestures and EEG Signals (제스처와 EEG 신호를 이용한 감정인식 방법)

  • Kim, Ho-Duck; Jung, Tae-Min; Yang, Hyun-Chang; Sim, Kwee-Bo
    • Journal of Institute of Control, Robotics and Systems / v.13 no.9 / pp.832-837 / 2007
  • Electroencephalography (EEG) has been used for many years in psychology to record the activity of the human brain. As technology has developed, the neural basis of the functional areas involved in emotion processing has gradually been revealed, so we use EEG to measure the fundamental areas of the human brain that control emotion. Hand gestures such as shaking and head gestures such as nodding are often used as body language for communication, and their recognition matters because gestures are a useful communication medium between humans and computers; gesture recognition is typically studied with computer vision methods. Existing research mostly uses either EEG signals or gestures alone for emotion recognition. In this paper, we use EEG signals and gestures together for human emotion recognition, and we select driver emotion as the specific target. The experimental results show that using both EEG signals and gestures achieves higher recognition rates than using EEG signals or gestures alone. For both EEG signals and gestures, features are selected with Interactive Feature Selection (IFS), a method based on reinforcement learning.
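
A hedged sketch of a reinforcement-style feature-selection loop in the spirit of IFS; the epsilon-greedy policy, reward shaping, and synthetic data are illustrative assumptions, not the authors' exact formulation:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(3)
y = rng.integers(0, 3, 90)                       # hypothetical emotion labels
X = rng.standard_normal((90, 12))
X[:, :4] += y[:, None]                           # only the first 4 features carry signal

def accuracy(feats):
    return cross_val_score(KNeighborsClassifier(5), X[:, feats], y, cv=3).mean()

selected, value = [], np.zeros(X.shape[1])       # per-feature value estimates
for _ in range(6):                               # one candidate feature per episode
    candidates = [f for f in range(X.shape[1]) if f not in selected]
    if rng.random() < 0.2:                       # epsilon-greedy exploration
        f = int(rng.choice(candidates))
    else:                                        # exploit the current value estimates
        f = max(candidates, key=lambda c: value[c])
    prev = accuracy(selected) if selected else 1.0 / 3   # chance-level baseline
    reward = accuracy(selected + [f]) - prev             # did adding the feature help?
    value[f] += 0.5 * (reward - value[f])        # incremental value update
    if reward > 0:
        selected.append(f)
print(sorted(selected), round(accuracy(selected), 2) if selected else "none kept")
```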

Human Action Recognition Based on Local Action Attributes

  • Zhang, Jing; Lin, Hong; Nie, Weizhi; Chaisorn, Lekha; Wong, Yongkang; Kankanhalli, Mohan S
    • Journal of Electrical Engineering and Technology / v.10 no.3 / pp.1264-1274 / 2015
  • Human action recognition has received much interest in the computer vision community. Most existing methods focus either on constructing robust descriptors from the temporal domain or on computational methods that exploit the discriminative power of a descriptor. In this paper we explore the idea of using local action attributes to form an action descriptor, where an action is no longer characterized by motion changes in the temporal domain but by local semantic descriptions of the action. We propose a novel framework that introduces local action attributes to represent an action for the final human action categorization. The local action attributes are defined for each body part and are independent of the global action. The resulting attribute descriptor is used to jointly model human actions and achieve robust performance. In addition, we study the impact of using local and global low-level body features for these attributes. Experiments on the KTH dataset and the MV-TJU dataset show that our local action attribute based descriptor improves action recognition performance.
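
A schematic of the attribute-descriptor construction: per-body-part attribute detectors produce scores that are concatenated and fed to a final classifier. The features, attribute annotations, and dimensions are all hypothetical here:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

rng = np.random.default_rng(4)
n, parts, attrs_per_part, dim = 150, 4, 3, 10
action = rng.integers(0, 6, n)                   # e.g. six KTH action classes
part_feats = rng.standard_normal((n, parts, dim)) + 0.2 * action[:, None, None]

cols = []
for part in range(parts):                        # attributes are local to each body part
    for _ in range(attrs_per_part):
        attr = rng.integers(0, 2, n)             # hypothetical binary attribute annotations
        det = LogisticRegression(max_iter=200).fit(part_feats[:, part], attr)
        cols.append(det.predict_proba(part_feats[:, part])[:, 1])
descriptor = np.column_stack(cols)               # the local-attribute action descriptor
clf = SVC().fit(descriptor, action)              # final action categorization
print(round(clf.score(descriptor, action), 2))   # training accuracy, for illustration only
```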