• Title/Summary/Keyword: Human motion detection

Search Results: 94

Development of Measurement System of Moving Distance Using a Low-Cost Accelerometer

  • Cho, Seong-Yun;Kim, Jin-Ho;Park, Chan-Gook
    • Institute of Control, Robotics and Systems (ICROS): Conference Proceedings
    • /
    • 2001.10a
    • /
    • pp.130.4-130
    • /
    • 2001
  • In this paper, a measurement system for moving distance is developed, and an error compensation method is proposed that exploits the characteristics of walking motion. As personal navigation systems and multimedia systems enter the commercial market, a person's moving distance is becoming an important piece of information. GPS provides this information easily, but it can be used only when the satellites are visible. An INS can calculate the moving distance anywhere, but its error grows with time due to sensor bias. In this paper, to measure human walking distance, a system using only a low-cost accelerometer is developed. The sensor bias is estimated and compensated using the characteristics of walking motion. The performance of the proposed system is verified by experiment.

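The bias-compensation idea in the abstract above can be sketched as follows. This is an illustrative Python sketch, not the paper's algorithm: it assumes that near-still samples read pure sensor bias, which is subtracted before double integration; the function name, threshold, and stillness test are invented for illustration.

```python
import numpy as np

def estimate_distance(acc, dt, still_threshold=0.05):
    """Estimate travelled distance from forward acceleration samples.

    Near-still samples (|acc| below a threshold) are assumed to read
    pure sensor bias; the mean of those samples is subtracted before
    double integration. Names and the threshold are illustrative.
    """
    acc = np.asarray(acc, dtype=float)
    still = np.abs(acc) < still_threshold
    bias = acc[still].mean() if still.any() else 0.0
    corrected = acc - bias
    velocity = np.cumsum(corrected) * dt   # integrate acceleration
    distance = np.cumsum(velocity) * dt    # integrate velocity
    return distance[-1], bias
```

Without the bias subtraction, the double integration would accumulate the constant offset quadratically in time, which is the INS drift problem the abstract mentions.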

Highly Stretchable and Sensitive Strain Sensors Fabricated by Coating Nylon Textile with Single Walled Carbon Nanotubes

  • Park, Da-Seul;Kim, Yoonyoung;Jeong, Soo-Hwan
    • Proceedings of the Korean Vacuum Society Conference
    • /
    • 2016.02a
    • /
    • pp.363.2-363.2
    • /
    • 2016
  • Stretchable strain sensors are becoming essential in diverse future applications such as human motion detection, soft robotics, and various biomedical devices. One well-known approach to fabricating stretchable strain sensors is to embed conductive nanomaterials, such as metal nanowires/nanoparticles, graphene, conducting polymers, and carbon nanotubes (CNTs), within an elastomeric substrate. Among these, CNTs have been considered promising candidate materials for stretchable strain sensors owing to their high electrical conductivity and excellent mechanical properties. In past decades, CNT-based strain sensors with high stretchability or high sensitivity have been developed; however, CNT-based strain sensors showing both high stretchability and high sensitivity have not been reported. Herein, highly stretchable and sensitive strain sensors were fabricated by integrating single-walled carbon nanotubes (SWNTs) and nylon textiles via a vacuum-assisted spray layer-by-layer process. Our strain sensors showed high sensitivity up to 100% tensile strain (gauge factor ~100). Cyclic tests confirmed that the sensors are robust and reliable. Moreover, our SWNT-based strain sensors were easily integrated on a human finger and knee to detect bending and walking motion. The approach presented here may be a route to highly stretchable and sensitive strain sensors, providing new opportunities for practical wearable devices.

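The gauge factor quoted in the abstract above is the standard figure of merit for strain sensors: relative resistance change divided by applied strain. A minimal sketch (the function name is illustrative):

```python
def gauge_factor(r0, r_strained, strain):
    """Gauge factor GF = (dR / R0) / strain: relative resistance change
    divided by applied tensile strain (strain as a fraction, 1.0 = 100%)."""
    return ((r_strained - r0) / r0) / strain
```

For example, a gauge factor of ~100 at 100% strain means the resistance grows to roughly 101 times its unstrained value.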

Pedestrians Action Interpretation based on CUDA for Traffic Signal Control (교통신호제어를 위한 CUDA기반 보행자 행동판단)

  • Lee, Hong-Chang;Rhee, Sang-Yong;Kim, Young-Baek
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.20 no.5
    • /
    • pp.631-637
    • /
    • 2010
  • In this paper, we propose a method for interpreting pedestrian motion for active traffic signal control. We detect pedestrian objects in video of a crosswalk area using the codebook method and acquire their contour information. To perform this stage quickly, we use parallel processing based on CUDA (Compute Unified Device Architecture). We also remove shadows, which distort object shapes. Each shadow-removed object is then classified as human or noise using the Hilbert scan distance. If an object is judged to be a human, we analyze its motion, facial area features, and waiting time to decide whether the pedestrian intends to cross the crosswalk. The traffic signal can then be controlled based on this judgment.
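The codebook detection step in the abstract above can be illustrated with a heavily simplified, grayscale stand-in: learn per-pixel intensity bounds from background-only frames, then flag pixels outside those bounds as foreground. The paper's full codebook model, CUDA parallelization, and Hilbert scan classification are not reproduced here; names and the margin are illustrative.

```python
import numpy as np

def learn_codebook(frames, margin=10.0):
    """Learn per-pixel intensity bounds from background-only frames
    (a simplified, grayscale stand-in for the codebook model)."""
    stack = np.stack(frames).astype(float)
    return stack.min(axis=0) - margin, stack.max(axis=0) + margin

def foreground_mask(frame, low, high):
    """Pixels falling outside the learned bounds are foreground."""
    f = np.asarray(frame, dtype=float)
    return (f < low) | (f > high)
```

Because the test is independent per pixel, this kind of model parallelizes naturally, which is why a CUDA implementation pays off.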

Three-dimensional Head Tracking Using Adaptive Local Binary Pattern in Depth Images

  • Kim, Joongrock;Yoon, Changyong
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.16 no.2
    • /
    • pp.131-139
    • /
    • 2016
  • Recognition of human motion has become a main area of computer vision due to its potential for human-computer interaction (HCI) and surveillance. Among existing recognition techniques for human motion, head detection and tracking is the basis of all human motion recognition. Various approaches have been tried to precisely detect and trace the position of the human head in two-dimensional (2D) images. However, it remains a challenging problem because human appearance varies greatly with pose, and images are affected by illumination changes. To enhance the performance of head detection and tracking, real-time three-dimensional (3D) data acquisition sensors, such as time-of-flight and Kinect depth sensors, have recently been used. In this paper, we propose an effective feature extraction method, called adaptive local binary pattern (ALBP), for depth-image-based applications. In contrast to the well-known conventional local binary pattern (LBP), the proposed ALBP can not only extract shape information without texture in depth images, but is also invariant to distance changes in range images. We apply the proposed ALBP to head detection and tracking in depth images to show its effectiveness and usefulness.
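For reference, the conventional LBP that ALBP extends compares each pixel's eight neighbors against the center value and packs the results into an 8-bit code. A minimal sketch of that baseline (ALBP's adaptive, depth-dependent thresholding is not reproduced here):

```python
import numpy as np

def lbp_code(patch):
    """8-bit LBP code for the center pixel of a 3x3 patch: each of the
    eight neighbors contributes a 1 bit if it is >= the center value."""
    c = patch[1, 1]
    neighbors = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                 patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    return sum(1 << i for i, n in enumerate(neighbors) if n >= c)
```

In a depth image the same comparison captures local surface shape rather than texture, which is the property the abstract exploits.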

Statistical Model for Emotional Video Shot Characterization (비디오 셧의 감정 관련 특징에 대한 통계적 모델링)

  • Park, Hyun-Jae;Kang, Hang-Bong
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.28 no.12C
    • /
    • pp.1200-1208
    • /
    • 2003
  • Affective computing plays an important role in intelligent human-computer interaction (HCI). To detect emotional events, it is desirable to construct a computational model for extracting emotion-related features from video. In this paper, we propose a statistical model based on the probabilistic distribution of low-level features in video shots. The proposed method extracts low-level features from video shots and then forms a GMM (Gaussian Mixture Model) over them to detect emotional shots. As low-level features, we use color, camera motion, and the sequence of shot lengths. The features are modeled as a GMM using the EM (Expectation Maximization) algorithm, and the relations between time and emotion are estimated by MLE (Maximum Likelihood Estimation). Finally, the two statistical models are combined in a Bayesian framework to detect emotional events in video.
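The GMM/EM modeling step mentioned above can be sketched generically. This is a 1-D toy version for clarity; the paper fits mixtures over multi-dimensional shot features, and the initialization here (evenly spaced means) is an assumption, not the paper's choice.

```python
import numpy as np

def em_gmm_1d(x, k=2, iters=50):
    """Fit a 1-D Gaussian mixture with the EM algorithm.

    A generic sketch of the GMM/EM modeling step; the paper fits
    mixtures over multi-dimensional shot features, not 1-D data.
    """
    x = np.asarray(x, dtype=float)
    mu = np.linspace(x.min(), x.max(), k)   # deterministic init
    var = np.full(k, x.var())
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibility of each component for each sample
        dens = (pi * np.exp(-(x[:, None] - mu) ** 2 / (2 * var))
                / np.sqrt(2 * np.pi * var))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return pi, mu, var
```

Once fitted, the per-component likelihoods of a new shot's features give the probabilistic evidence that the Bayesian combination step would consume.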

Human Tracking and Body Silhouette Extraction System for Humanoid Robot (휴머노이드 로봇을 위한 사람 검출, 추적 및 실루엣 추출 시스템)

  • Kwak, Soo-Yeong;Byun, Hye-Ran
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.34 no.6C
    • /
    • pp.593-603
    • /
    • 2009
  • In this paper, we propose a new integrated computer vision system designed to track multiple human beings and extract their silhouettes with an active stereo camera. The proposed system consists of three modules: detection, tracking, and silhouette extraction. Detection is performed by camera ego-motion compensation and disparity segmentation. For tracking, we present an efficient mean-shift-based tracking method in which the tracked objects are characterized by disparity-weighted color histograms. The silhouette is obtained by two-step segmentation: a trimap is estimated in advance and then incorporated into a graph cut framework for fine segmentation. The proposed system was evaluated against ground truth data and shown to detect and track multiple people well while producing high-quality silhouettes. The proposed system can assist gesture and gait recognition in the field of Human-Robot Interaction (HRI).
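The disparity-weighted color histogram used as the tracking model above can be sketched as follows: each pixel votes into its color bin with a weight equal to its disparity, so nearer pixels (the tracked person) dominate the model. Names, the normalized-hue assumption, and the bin count are illustrative.

```python
import numpy as np

def disparity_weighted_histogram(hue, disparity, bins=16):
    """Color histogram in which each pixel's vote is weighted by its
    disparity, so nearer pixels (the tracked person) dominate.
    `hue` is assumed normalized to [0, 1); names are illustrative."""
    h = np.zeros(bins)
    idx = np.clip((np.asarray(hue, dtype=float) * bins).astype(int),
                  0, bins - 1)
    np.add.at(h, idx.ravel(), np.asarray(disparity, dtype=float).ravel())
    return h / h.sum()
```

Mean-shift tracking then iteratively moves the search window toward the region whose histogram best matches this model.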

Human Robot Interaction Using Face Direction Gestures

  • Kwon, Dong-Soo;Bang, Hyo-Choong
    • Institute of Control, Robotics and Systems (ICROS): Conference Proceedings
    • /
    • 2001.10a
    • /
    • pp.171.4-171
    • /
    • 2001
  • This paper proposes a method of human-robot interaction (HRI) using face directional gestures. A single CCD color camera is used to capture the face region, and the robot recognizes the face directional gesture based on the positions of facial features. One can give commands such as stop, go, turn left, and turn right to the robot using these gestures. Since the robot also has ultrasonic sensors, it can detect obstacles and determine a safe direction at its current position. By combining the user's command with the sensed obstacle configuration, the robot selects a safe and efficient motion direction. Simulation results show that the robot with HRI navigates more reliably.

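The command/obstacle combination step described in the abstract above can be sketched as a simple arbitration rule: follow the gestured heading unless it is blocked, otherwise pick the closest safe heading. The heading set, units, and threshold are invented for illustration and are not the paper's method.

```python
def select_direction(command_deg, sonar, safe_cm=50.0):
    """Pick the safe heading closest to the commanded one.

    `sonar` maps candidate headings (degrees) to measured obstacle
    distances (cm); headings, units, and the threshold are invented
    for illustration. Returns None (stop) if no heading is safe.
    """
    safe = [h for h, dist in sonar.items() if dist >= safe_cm]
    if not safe:
        return None
    return min(safe, key=lambda h: abs(h - command_deg))
```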

A study on the voice command recognition at the motion control in the industrial robot (산업용 로보트의 동작제어 명령어의 인식에 관한 연구)

  • Lee, Soon-Yo;Kwon, Kyu-Sik;Kim, Hong-Tae
    • Journal of the Ergonomics Society of Korea
    • /
    • v.10 no.1
    • /
    • pp.3-10
    • /
    • 1991
  • The teach pendant and keyboard have been used as input devices for control commands in human-robot systems, but many problems occur when the user is a novice. A speech recognition system is therefore required for communication between a human and the robot. In this study, Korean voice commands consisting of eight robot commands and ten digits are described based on broad phonetic analysis. Applying broad phonetic analysis, the phonemes of the voice commands are divided into phoneme groups with similar features, such as plosive, fricative, affricate, nasal, and glide sounds. The feature parameters and their ranges for detecting the phoneme groups are then found by the minimax method. Classification rules consist of combinations of feature parameters, such as zero crossing rate (ZCR), log energy (LE), up-and-down (UD), and formant frequencies, together with their ranges. Voice commands were recognized by these classification rules, with a recognition rate over 90 percent in this experiment. The experiment also showed that the recognition rate for digits was better than that for robot commands.

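Two of the feature parameters named in the abstract above, zero crossing rate (ZCR) and log energy (LE), are standard short-time speech features and can be computed per frame as follows (a generic sketch; frame sizes and any thresholds used in the paper's rules are not reproduced):

```python
import numpy as np

def zero_crossing_rate(frame):
    """Fraction of adjacent sample pairs whose signs differ."""
    s = np.sign(np.asarray(frame, dtype=float))
    return float(np.mean(s[:-1] != s[1:]))

def log_energy(frame, eps=1e-12):
    """Log of the frame's total energy; eps avoids log(0) on silence."""
    f = np.asarray(frame, dtype=float)
    return float(np.log(np.sum(f ** 2) + eps))
```

High ZCR with low energy typically indicates fricative sounds, while low ZCR with high energy indicates voiced sounds, which is why such features can separate broad phoneme groups.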

Vision-Based Finger Action Recognition by Angle Detection and Contour Analysis

  • Lee, Dae-Ho;Lee, Seung-Gwan
    • ETRI Journal
    • /
    • v.33 no.3
    • /
    • pp.415-422
    • /
    • 2011
  • In this paper, we present a novel vision-based method of recognizing finger actions for use in electronic appliance interfaces. Human skin is first detected using color and consecutive motion information. Then, fingertips are detected by a novel scale-invariant angle detection based on a variable k-cosine. Fingertip tracking is implemented as tracking of the detected region. By analyzing the contour of the tracked fingertip, fingertip parameters such as position, thickness, and direction are calculated. Finger actions such as moving, clicking, and pointing are recognized by analyzing these fingertip parameters. Experimental results show that the proposed angle detection correctly detects fingertips and that the recognized actions can be used to interface with electronic appliances.
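The k-cosine measure underlying the angle detection above compares, at each contour point, the two vectors to the points k steps before and after it. A minimal sketch (the paper's variable-k selection rule, which gives scale invariance, is not reproduced here):

```python
import numpy as np

def k_cosine(contour, i, k):
    """Cosine of the angle at contour point i, formed with the points
    k steps before and after it (indices wrap around the contour).
    Values near 1 indicate a sharp spike such as a fingertip."""
    p = np.asarray(contour, dtype=float)
    n = len(p)
    a = p[(i - k) % n] - p[i]
    b = p[(i + k) % n] - p[i]
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```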

Multiple Dimension User Motion Detection System based on Wireless Sensors (무선센서 기반 다차원 사용자 움직임 탐지 시스템)

  • Kim, Jeong-Rae;Jeong, In-Bum
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.15 no.3
    • /
    • pp.700-712
    • /
    • 2011
  • Thanks to recent advances in electronic devices, humans can access computer networks regardless of location or time. However, the widely used mouse, joystick, and trackball input systems are not easy to carry, and they bind the user's hands to the working space, which is inconvenient in ubiquitous environments. In this paper, we propose a multiple-dimension human motion detection system based on wireless sensor networks. It is a portable input device, is easy to install, and leaves the user's hands free during input. Our implemented system comprises three components. The first is an input unit that senses user motion and transmits the collected data to the receiver. The second is a receiver that conveys the received data to an application running on a server computer. The third is the application, which performs command operations according to the received data. Experiments show that the proposed system accurately detects the characteristics of user arm motions and fully supports the corresponding input requests.
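The application component described above ultimately maps sensed motion data to input commands. A minimal sketch of such a mapping is shown below; the axis names, threshold, and command set are illustrative assumptions, not the paper's protocol.

```python
def motion_to_command(ax, ay, threshold=0.3):
    """Map two accelerometer axes to a directional input command.

    Axis names, the threshold, and the command set are illustrative
    assumptions; the paper's protocol details are not reproduced.
    """
    if abs(ax) < threshold and abs(ay) < threshold:
        return "idle"
    if abs(ax) >= abs(ay):
        return "right" if ax > 0 else "left"
    return "up" if ay > 0 else "down"
```

The dead-zone threshold suppresses sensor noise so that small hand tremors do not register as input, a common design choice for motion-based pointing devices.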