• Title/Summary/Keyword: human engineering


Realizing a Mixed Reality Space Guided by a Virtual Human: Creating a Virtual Human from Incomplete 3-D Motion Data

  • Abe, Shinsuke;Yamaguti, Iku;Tan, Joo Kooi;Ishikawa, Seiji
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2003.10a / pp.1625-1628 / 2003
  • Recently the VR technique has evolved into a mixed reality (MR) technique, in which a user can observe the real world in front of him/her together with displayed virtual objects. This has been realized by employing a see-through type HMD (S-HMD). We have been developing a mixed reality space that employs the MR technique. The objective of our study is to realize a virtual human that acts as a man-machine interface in the real space. It is therefore important to create a virtual human that acts naturally in front of a user. To give natural motions to the virtual human, we employ a motion capture technique developed previously, with which we have already created various 3-D human motion models. In this paper, we present a technique for creating a virtual human using a human model provided by the computer graphics software 3D Studio Max(C). The main difficulty is that 3D Studio Max(C) requires 28 feature points to describe a human motion, whereas the motion capture system used provides fewer feature points. We therefore propose a technique for producing motion data for 28 feature points from motion data containing fewer feature points, i.e., from incomplete motion data. The performance of the proposed technique was examined by visually inspecting several motions of a created virtual human, and overall natural motions were realized.
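
The crux of the paper is completing feature points that the capture system does not deliver. As an illustration only (the authors' completion rule is not given in the abstract), a minimal Python sketch might estimate the missing in-between joints as midpoints of the limb chains they lie on; the joint names and the reduced marker set below are assumptions.

```python
import numpy as np

def expand_frame(frame):
    """frame: dict mapping captured joint names to np.array([x, y, z]).
    Adds estimated in-between joints (elbows, knees) as midpoints of the limb
    chains they lie on -- a crude stand-in for the paper's completion technique."""
    mid = lambda a, b: (frame[a] + frame[b]) / 2.0
    out = dict(frame)
    out["r_elbow"] = mid("r_shoulder", "r_hand")
    out["l_elbow"] = mid("l_shoulder", "l_hand")
    out["r_knee"] = mid("pelvis", "r_foot")
    out["l_knee"] = mid("pelvis", "l_foot")
    return out

# Example frame holding only the (invented) captured joints
frame = {k: np.random.rand(3) for k in
         ["r_shoulder", "r_hand", "l_shoulder", "l_hand", "pelvis", "r_foot", "l_foot"]}
print(sorted(expand_frame(frame)))
```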

Hands-on Tools to Prevent Human Errors in Highway Construction (고속도로 건설현장의 인적오류 예방을 위한 실무자용 도구 개발)

  • Kim, Jung-Yong;Yoon, Sang-Young;Cho, Young-Jin
    • Journal of the Ergonomics Society of Korea / v.30 no.1 / pp.19-28 / 2011
  • Objective: The aim of this study is to reclassify human errors and to develop hands-on tools that apply the new classification to preventing human error accidents at highway construction sites. Background: The main cause of accidents in highway construction has been reported as the carelessness of workers. However, such a diagnosis does not operationally help prevent accidents in real workplaces. Method: Accidents in highway construction were reanalyzed and the causes of human error were reclassified in order to educate workers and improve awareness of human error in highway construction. A field survey and interviews with safety managers and workers were conducted to find the causal relationship between actual accidents and human errors. Results: The most frequently observed human errors in highway construction were classified into six categories: mis-perception, distraction, memory failure, slip, cognition error, and mis-judgment. To provide hands-on tools that increase awareness of human error in the construction field, a human error checklist and a card sorting diary were developed. In particular, the card sorting diary was designed to improve the ability of safety managers to inspect human errors at construction sites. In addition, posters were developed based on actual accident cases. Conclusion: We suggest that improved awareness, together with the analytical reporting supported by the checklist, card sorting diary, and posters, can collectively prevent accidents in the construction field. Application: The classification of human errors, the hands-on tools, and the posters are directly applicable to highway construction sites. This analytical and collective approach to preventing human error-related accidents could be extended to other construction workplaces.
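
As a rough illustration of how the six-category classification could back a checklist or card-sorting-diary tally, here is a minimal Python sketch; the record format is invented for the example and is not part of the paper.

```python
from collections import Counter

# The six human-error categories reported in the abstract.
ERROR_CATEGORIES = ["mis-perception", "distraction", "memory failure",
                    "slip", "cognition error", "mis-judgment"]

def summarize_diary(entries):
    """entries: list of (date, category) records from a card-sorting-diary style log
    (the record format is invented for this sketch). Returns per-category counts so
    a safety manager can see which error types dominate on a site."""
    counts = Counter(cat for _, cat in entries if cat in ERROR_CATEGORIES)
    return {cat: counts.get(cat, 0) for cat in ERROR_CATEGORIES}

print(summarize_diary([("2011-03-02", "slip"), ("2011-03-04", "distraction")]))
```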

A Kidnapping Detection Using Human Pose Estimation in Intelligent Video Surveillance Systems

  • Park, Ju Hyun;Song, KwangHo;Kim, Yoo-Sung
    • Journal of the Korea Society of Computer and Information / v.23 no.8 / pp.9-16 / 2018
  • In this paper, a kidnapping detection scheme is proposed in which human pose estimation is used to classify accurately between kidnapping cases and normal ones. To estimate human poses from input video, information on 10 human joints is extracted with the OpenPose library. In addition to the features used in the previous study to represent the size-change rates and the regularities of human activities, pose features computed from the locations of the detected joints are used to distinguish kidnapping situations from normal accompanying ones. A frame-based kidnapping detection scheme is built on the J48 decision tree model, selected from a comparison of several representative classification models. When the ratio of kidnapping-situation frames after two people meet in a video exceeds a threshold, the proposed scheme detects and reports the occurrence of a kidnapping event. To check the feasibility of the proposed scheme, its detection accuracy is compared with that of the previous scheme. According to the experimental results, the proposed scheme detected kidnapping situations 4.73% more accurately than the previous scheme.
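
A hedged sketch of the frame-based pipeline described above: pose-derived features feed a decision tree (here scikit-learn's DecisionTreeClassifier as a stand-in for J48), and an event is raised when the ratio of positive frames exceeds a threshold. The feature definitions, joint indices, and toy training data are assumptions, not the paper's.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier  # stand-in for the J48 (C4.5) tree

def pose_features(joints):
    """joints: (10, 2) array of one person's joint coordinates from a pose estimator
    such as OpenPose. Returns a simple per-frame feature vector (bounding-box size
    and torso inclination); the paper's exact feature set is not reproduced here."""
    h = joints[:, 1].max() - joints[:, 1].min()
    w = joints[:, 0].max() - joints[:, 0].min()
    neck, hip = joints[1], joints[8]          # joint indices are illustrative
    incline = np.arctan2(hip[0] - neck[0], hip[1] - neck[1])
    return [h, w, h / max(w, 1e-6), incline]

# Frame-level classifier: 1 = kidnapping-like frame, 0 = normal accompanying frame.
# Real training data would come from labelled surveillance footage; this is a toy fit.
rng = np.random.default_rng(0)
X_train = rng.random((200, 4))
y_train = (X_train[:, 3] > 0.5).astype(int)
clf = DecisionTreeClassifier(max_depth=5).fit(X_train, y_train)

def detect_kidnapping(per_frame_features, threshold_ratio=0.5):
    """Raise a kidnapping event when the ratio of positively classified frames
    after the two people meet exceeds the threshold, as in the abstract."""
    preds = clf.predict(np.asarray(per_frame_features))
    return preds.mean() >= threshold_ratio
```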

Motion Visualization of a Vehicle Driver Based on Virtual Reality (가상현실 기반에서 차량 운전자 거동의 가시화)

  • Jeong, Yun-Seok;Son, Kwon;Choi, Kyung-Hyun
    • Transactions of the Korean Society of Automotive Engineers / v.11 no.5 / pp.201-209 / 2003
  • Virtual human models are widely used to save time and expense in vehicle safety studies. A human model is an essential tool for visualizing and simulating a vehicle driver in virtual environments. This research focuses on the creation and application of a human model for virtual reality. Published Korean anthropometric data are used to determine the basic dimensions of the human model. These data are applied to GEBOD, a human body data generation program, which computes the body segment geometry, mass properties, joint locations, and mechanical properties. The human model was then constructed in MADYMO based on the data from GEBOD. A frontal crash and a bump-passing test were simulated, and the calculated driver motion data were transmitted into the virtual environment. The human model was organized into scene graphs, and its motion was visualized with virtual reality techniques including OpenGL Performer. The human model can also be controlled by an arm master to test the driver's behavior in the virtual environment.
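
As a loose illustration of driving a segment hierarchy with simulated joint motion before handing it to a scene graph, consider the following Python sketch; the chain, segment offsets, and planar rotations are invented and merely stand in for the GEBOD/MADYMO/OpenGL Performer toolchain used in the paper.

```python
import numpy as np

def rot_z(theta):
    """Rotation about the z axis; a planar chain is enough for the sketch."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Toy kinematic chain: segment -> (parent, offset of its joint from the parent joint).
# Real segment dimensions would come from anthropometric data, as GEBOD provides.
CHAIN = {
    "torso":     (None,        np.array([0.0, 0.0, 0.0])),
    "upper_arm": ("torso",     np.array([0.0, 0.45, 0.0])),
    "forearm":   ("upper_arm", np.array([0.30, 0.0, 0.0])),
}

def world_positions(joint_angles):
    """joint_angles: dict segment -> rotation (rad) for one simulated frame.
    Returns world-space joint origins, ready to push into a scene-graph node."""
    pos, rot = {}, {}
    for name, (parent, offset) in CHAIN.items():
        local = rot_z(joint_angles.get(name, 0.0))
        if parent is None:
            rot[name], pos[name] = local, offset
        else:
            rot[name] = rot[parent] @ local
            pos[name] = pos[parent] + rot[parent] @ offset
    return pos

print(world_positions({"upper_arm": 0.3, "forearm": 0.8}))
```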

Human Action Recognition Based on 3D Human Modeling and Cyclic HMMs

  • Ke, Shian-Ru;Thuc, Hoang Le Uyen;Hwang, Jenq-Neng;Yoo, Jang-Hee;Choi, Kyoung-Ho
    • ETRI Journal / v.36 no.4 / pp.662-672 / 2014
  • Human action recognition is used in areas such as surveillance, entertainment, and healthcare. This paper proposes a system that recognizes both single and continuous human actions from monocular video sequences, based on 3D human modeling and cyclic hidden Markov models (CHMMs). First, for each frame in a monocular video sequence, the 3D coordinates of the joints belonging to a human subject, through actions of multiple cycles, are extracted using 3D human modeling techniques. The 3D coordinates are then converted into a set of geometrical relational features (GRFs) for dimensionality reduction and increased discrimination. For further dimensionality reduction, k-means clustering is applied to the GRFs to generate clustered feature vectors. These vectors are used to train CHMMs separately for different types of actions, based on the Baum-Welch re-estimation algorithm. For recognition of continuous actions that are concatenated from several distinct types of actions, a designed graphical model is used to systematically concatenate the separately trained CHMMs. The experimental results show the effective performance of our proposed system on both single and continuous action recognition problems.
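
A compressed sketch of the training path described above (GRF computation, k-means quantization, Baum-Welch HMM fitting) might look as follows, assuming the third-party hmmlearn package, whose recent versions expose the discrete-emission model as CategoricalHMM; the toy GRF definition and the plain HMM in place of the paper's cyclic HMM are simplifications.

```python
import numpy as np
from sklearn.cluster import KMeans
from hmmlearn import hmm  # third-party package, assumed available

def geometric_relational_features(joints_3d):
    """joints_3d: (n_joints, 3) array for one frame. A toy GRF: distances between
    a few joint pairs; the paper's actual GRF definition is richer."""
    pairs = [(0, 1), (1, 2), (2, 3), (0, 3)]      # illustrative joint pairs
    return [np.linalg.norm(joints_3d[a] - joints_3d[b]) for a, b in pairs]

def train_action_model(sequences, n_symbols=16, n_states=5):
    """sequences: list of (n_frames, n_features) GRF arrays for one action type.
    Quantize with k-means, then fit a discrete HMM with Baum-Welch (hmmlearn's EM);
    the cyclic transition structure of the paper's CHMMs is not enforced here."""
    km = KMeans(n_clusters=n_symbols, n_init=10).fit(np.vstack(sequences))
    symbols = [km.predict(s).reshape(-1, 1) for s in sequences]
    model = hmm.CategoricalHMM(n_components=n_states, n_iter=50)
    model.fit(np.vstack(symbols), lengths=[len(s) for s in symbols])
    return km, model
```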

Human Detection in Overhead View and Near-Field View Scene

  • Jung, Sung-Hoon;Jung, Byung-Hee;Kim, Min-Hwan
    • Journal of Korea Multimedia Society / v.11 no.6 / pp.860-868 / 2008
  • Human detection techniques for outdoor scenes have been studied for a long time, whether to watch for suspicious movements or to keep people out of danger. However, while many human detection methods for far-field view scenes have been developed, there are few for overhead or near-field view scenes. In this paper, a set of five features useful for human detection in overhead view scenes and another set of four features useful in near-field view scenes are suggested. Eight feature candidates are first extracted by analyzing the geometrically varying characteristics of moving objects in sample video sequences. The features that contribute most to classifying humans against other moving objects in each view are then selected using a neural network learning technique. Through experiments with hundreds of moving objects, we found that each set of features is very useful for human detection, and the classification accuracy for overhead view and near-field view scenes was over 90%. The suggested sets of features can be used effectively in a PTZ-camera-based surveillance system in which both overhead and near-field view scenes appear.
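
To make the feature-then-classifier idea concrete, here is a minimal sketch with invented blob features and a small scikit-learn MLP standing in for the paper's neural network; none of this reproduces the authors' selected feature sets.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier  # small stand-in for the paper's network

def blob_features(contour_pts):
    """contour_pts: (n, 2) boundary points of one moving object in a given view.
    Returns simple geometric descriptors; the paper selects view-specific subsets
    of such features, and these four are illustrative, not the authors' exact set."""
    x, y = contour_pts[:, 0], contour_pts[:, 1]
    w, h = x.max() - x.min(), y.max() - y.min()
    area = max(w * h, 1e-6)
    return [area, h / max(w, 1e-6), len(contour_pts) / area, h]

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000)
# clf.fit(feature_rows, labels)                  # 1 = human, 0 = other moving object
# clf.predict([blob_features(new_object_pts)])   # classify a new moving object
```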

Anthropomorphic Animal Face Masking using Deep Convolutional Neural Network based Animal Face Classification

  • Khan, Rafiul Hasan;Lee, Youngsuk;Lee, Suk-Hwan;Kwon, Oh-Jun;Kwon, Ki-Ryong
    • Journal of Korea Multimedia Society / v.22 no.5 / pp.558-572 / 2019
  • Anthropomorphism is the attribution of human traits, emotions, or intentions to non-human entities. Anthropomorphic animal face masking is the process by which human characteristics are mapped onto an animal face. In this research, we propose a compact system that finds the resemblance between a human face and an animal face using a Deep Convolutional Neural Network (DCNN) and then morphs between them. The process first finds which animal face most resembles the given human face through DCNN-based animal face classification, and then performs triangulation-based morphing between the human face and the most similar animal face. In place of the conventional manual control point selection performed by an animator, we propose a Viola-Jones-based control point selection process that detects facial features of the human face and places the control points automatically. To support our approach, we built our own dataset of ten thousand animal faces and a fourteen-layer DCNN. The simulation results demonstrate, first, that our proposed DCNN architecture outperforms related methods in animal face classification accuracy and, second, that the proposed morphing method completes the morphing process with less deformation and without any human assistance.
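
A rough sketch of the automatic control-point step and one triangle of the morph, using OpenCV's stock Viola-Jones cascades and affine warping; the landmark choice and the omitted per-triangle masking and blending are simplifications, not the paper's exact procedure.

```python
import cv2
import numpy as np

# Stock Viola-Jones cascades shipped with OpenCV; the paper's own control points
# may differ, so treat these landmarks as illustrative only.
face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

def control_points(gray):
    """Automatically place control points: face-box corners plus detected eye centres."""
    pts = []
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.1, 5):
        pts += [(x, y), (x + w, y), (x, y + h), (x + w, y + h)]
        for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(gray[y:y + h, x:x + w]):
            pts.append((x + ex + ew // 2, y + ey + eh // 2))
    return np.float32(pts)

def morph_triangle(src_img, tri_src, tri_dst, alpha, out_size):
    """Warp one Delaunay triangle of the source toward its target position.
    Per-triangle masking and cross-dissolve with the animal image are omitted."""
    tri_mid = np.float32((1 - alpha) * tri_src + alpha * tri_dst)
    M = cv2.getAffineTransform(np.float32(tri_src), tri_mid)
    return cv2.warpAffine(src_img, M, out_size)
```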

Torque Estimation of the Human Elbow Joint using the MVS (Muscle Volume Sensor) (근 부피 센서를 이용한 인체 팔꿈치 관절의 동작 토크 추정)

  • Lee, Hee Don;Lim, Dong Hwan;Kim, Wan Soo;Han, Jung Soo;Han, Chang Soo;An, Jae Yong
    • Journal of the Korean Society for Precision Engineering / v.30 no.6 / pp.650-657 / 2013
  • This study uses a muscle activation sensor and an elbow joint model to develop an algorithm that estimates human elbow joint torque for use in a human-robot interface. A modular-type MVS (Muscle Volume Sensor) and a calibration algorithm are developed to measure the muscle activation signal, which is represented as the normalized, calibrated MVS output. A Hill-type muscle model and the kinematic model of the muscle are then applied to the activation signal to estimate the joint torque. Experiments were performed to evaluate the proposed algorithm in isotonic contraction motions using KIN-COM(R) equipment at 5, 10, and 15 Nm. The algorithm and its feasibility for use as a human-robot interface are verified by comparing the applied joint load with the torque estimated by the algorithm.
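
As a hedged illustration of how a Hill-type model turns a normalized activation signal into a joint torque, consider the following sketch; the force-length and force-velocity curves and all parameter values are placeholders rather than the paper's identified model.

```python
import numpy as np

def hill_type_torque(activation, muscle_length, muscle_velocity,
                     f_max=600.0, l_opt=0.12, v_max=0.6, moment_arm=0.03):
    """Very reduced Hill-type estimate of elbow flexion torque from a normalized
    activation signal (e.g. a calibrated MVS reading in [0, 1]). All parameter
    values are placeholders, not the paper's identified constants."""
    # Force-length relation: bell-shaped curve around the optimal fibre length.
    f_l = np.exp(-((muscle_length - l_opt) / (0.45 * l_opt)) ** 2)
    # Force-velocity relation: simple linear drop-off with shortening speed.
    f_v = np.clip(1.0 - muscle_velocity / v_max, 0.0, 1.3)
    force = activation * f_max * f_l * f_v    # muscle force in newtons
    return force * moment_arm                 # joint torque in N*m

# Example: 0.8 normalized activation near optimal fibre length, slow shortening
print(hill_type_torque(0.8, 0.12, 0.05))
```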