• Title/Summary/Keyword: human features


Extracting Features of Human Knowledge Systems for Active Knowledge Management Systems

  • Yuan Miao;Robert Gay;Siew, Chee-Kheong;Shen, Zhi-Qi
    • Proceedings of the Korea Intelligent Information System Society Conference
    • /
    • 2001.01a
    • /
    • pp.265-271
    • /
    • 2001
  • It is highly desirable for research in the artificial intelligence area to be able to manage knowledge as human beings do. One of the remarkable properties of human knowledge management is that it is active: human beings actively manage their knowledge, resolve conflicts, and make inferences. This is a major difference from artificial intelligence systems. This paper focuses on the features of human knowledge systems that underlie this active nature. With these features extracted, further research can construct a suitable infrastructure that facilitates them in a man-made active knowledge management system. The paper proposes ten features that human beings follow to maintain their knowledge. We believe that realizing these features with suitable knowledge representation/decision models and software agent technology will advance the evolution of active knowledge management systems.


Depth Images-based Human Detection, Tracking and Activity Recognition Using Spatiotemporal Features and Modified HMM

  • Kamal, Shaharyar;Jalal, Ahmad;Kim, Daijin
    • Journal of Electrical Engineering and Technology
    • /
    • v.11 no.6
    • /
    • pp.1857-1862
    • /
    • 2016
  • Human activity recognition using depth information is an emerging and challenging technology in computer vision, owing to the considerable attention it has received from practical applications such as smart home/office systems, personal health care, and 3D video games. This paper presents a novel framework for 3D human body detection, tracking, and recognition from depth video sequences using spatiotemporal features and a modified HMM. To detect the human silhouette, raw depth data is examined under spatial continuity and constraints of human motion information, while frame differencing is used to track human movements. The feature extraction mechanism combines spatial depth shape features with temporal joint features to improve classification performance; both feature types are fused to recognize different activities using the modified hidden Markov model (M-HMM). The proposed approach is evaluated on two challenging depth video datasets. Moreover, our system can handle rotation of the subject's body parts as well as missing body parts, which constitutes a major contribution to human activity recognition.
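The classification scheme this abstract describes, scoring an observation sequence against one trained HMM per activity and picking the most likely model, can be illustrated with the standard forward algorithm. This is a minimal stdlib sketch, not the authors' M-HMM: the two toy models, their probabilities, and the observation sequence are all hypothetical.

```python
import math

def forward_log_likelihood(obs, start_p, trans_p, emit_p):
    """Log-likelihood of a discrete observation sequence under an HMM
    (standard forward algorithm; states and symbols are integer indices)."""
    n_states = len(start_p)
    # Initialise with the first observation.
    alpha = [start_p[s] * emit_p[s][obs[0]] for s in range(n_states)]
    for o in obs[1:]:
        # Propagate one step: sum over predecessor states, then emit.
        alpha = [sum(alpha[sp] * trans_p[sp][s] for sp in range(n_states)) * emit_p[s][o]
                 for s in range(n_states)]
    return math.log(sum(alpha))

# Two toy "activity" models; a sequence is assigned to the likelier one.
start = [0.6, 0.4]
trans_A = [[0.9, 0.1], [0.2, 0.8]]
emit_A = [[0.8, 0.2], [0.3, 0.7]]
trans_B = [[0.5, 0.5], [0.5, 0.5]]
emit_B = [[0.2, 0.8], [0.7, 0.3]]

seq = [0, 0, 1, 0]
ll_A = forward_log_likelihood(seq, start, trans_A, emit_A)
ll_B = forward_log_likelihood(seq, start, trans_B, emit_B)
best = "A" if ll_A > ll_B else "B"
```

A real system would replace the integer symbols with quantized depth-shape/joint feature vectors and train the matrices with Baum-Welch.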

Improved DT Algorithm Based Human Action Features Detection

  • Hu, Zeyuan;Lee, Suk-Hwan;Lee, Eung-Joo
    • Journal of Korea Multimedia Society
    • /
    • v.21 no.4
    • /
    • pp.478-484
    • /
    • 2018
  • The choice of motion features directly influences the result of a human action recognition method. Many factors, such as the appearance of the human body, the environment, and the video camera, affect a single feature differently, so the accuracy of action recognition is restricted. The Dense Trajectories (DT) algorithm is a classic feature extraction algorithm in the field of behavior recognition, but it has some defects in its use of optical flow images. Building on a study of the representation and recognition of human actions, and giving full consideration to the advantages and disadvantages of different features, this paper uses the improved Dense Trajectories (iDT) algorithm to optimize and extract optical flow features of human actions, then combines them with a Support Vector Machine to classify human behavior, using images from the KTH database for training and testing.
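The trajectory-shape channel of (i)DT encodes a tracked point as its sequence of displacement vectors, normalized by the sum of their magnitudes. A minimal sketch of that descriptor (the sample track is hypothetical; the full pipeline also adds HOG/HOF/MBH channels):

```python
import math

def trajectory_descriptor(points):
    """Normalised displacement descriptor of a 2-D point track, as in the
    trajectory-shape channel of (i)DT: S = (dP_1, ..., dP_{L-1}) / sum ||dP_j||."""
    deltas = [(x2 - x1, y2 - y1)
              for (x1, y1), (x2, y2) in zip(points, points[1:])]
    norm = sum(math.hypot(dx, dy) for dx, dy in deltas)
    if norm == 0:
        # Static track: return an all-zero descriptor of the right length.
        return [0.0] * (2 * len(deltas))
    return [d / norm for dx, dy in deltas for d in (dx, dy)]

# A track that moves one step right, then one step up.
desc = trajectory_descriptor([(0, 0), (1, 0), (1, 1)])
```

Descriptors like this, stacked over all sampled tracks and encoded (e.g. with a bag-of-features histogram), would then be fed to the SVM classifier the abstract mentions.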

Dense RGB-D Map-Based Human Tracking and Activity Recognition using Skin Joints Features and Self-Organizing Map

  • Farooq, Adnan;Jalal, Ahmad;Kamal, Shaharyar
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.9 no.5
    • /
    • pp.1856-1869
    • /
    • 2015
  • This paper addresses the issues of 3D human activity detection, tracking, and recognition from RGB-D video sequences using a feature-structured framework. Initially, dense depth images are captured with a depth camera. To track human silhouettes, we consider spatial/temporal continuity and constraints of human motion information, and compute the centroid of each activity based on a chain-coding mechanism and centroid point extraction. For body skin joint features, we estimate human body skin color to identify body parts (i.e., head, hands, and feet) and extract joint point information. These joint points are further processed to extract features, including distance position features and centroid distance features. Lastly, self-organizing maps are used to recognize different activities. Experimental results demonstrate that the proposed method is reliable and efficient in recognizing human poses in different realistic scenes. The proposed system is applicable to consumer applications such as healthcare, video surveillance, and indoor monitoring systems that track and recognize the activities of multiple users.
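The centroid-distance features the abstract mentions reduce to two small computations: the centroid of the silhouette pixel set, and the distance from each detected body-part point to it. A stdlib sketch with hypothetical toy coordinates (not the authors' chain-coding implementation):

```python
import math

def centroid(points):
    """Centroid of a set of silhouette pixel coordinates."""
    n = len(points)
    return (sum(x for x, _ in points) / n, sum(y for _, y in points) / n)

def centroid_distance_features(joints, c):
    """Distance from each detected body-part point (head, hands, feet)
    to the silhouette centroid."""
    return [math.hypot(x - c[0], y - c[1]) for x, y in joints]

silhouette = [(0, 0), (2, 0), (0, 2), (2, 2)]  # toy pixel set
c = centroid(silhouette)
feats = centroid_distance_features([(1, 3), (4, 1)], c)
```

Vectors of such distances per frame are the kind of input a self-organizing map can cluster into activity classes.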

Human Detection in Overhead View and Near-Field View Scene

  • Jung, Sung-Hoon;Jung, Byung-Hee;Kim, Min-Hwan
    • Journal of Korea Multimedia Society
    • /
    • v.11 no.6
    • /
    • pp.860-868
    • /
    • 2008
  • Human detection techniques in outdoor scenes have long been studied to watch for suspicious movements or to keep someone from danger. However, while many human detection methods for far-field view scenes have been developed, there are few for overhead or near-field view scenes. In this paper, a set of five features useful for human detection in overhead view scenes and another set of four features useful in near-field view scenes are suggested. Eight feature candidates are first extracted by analyzing the geometrically varying characteristics of moving objects in sample video sequences. The features that contribute most to distinguishing humans from other moving objects in each view are then selected using a neural network learning technique. Through experiments with hundreds of moving objects, we found that each set of features is very useful for human detection, and classification accuracy for overhead view and near-field view scenes was over 90%. The suggested feature sets can be used effectively in a PTZ-camera-based surveillance system where both overhead and near-field view scenes appear.
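Classifying moving objects as human or non-human from a few geometric features, as this abstract describes, can be sketched with the simplest neural learner, a single perceptron. This is an illustration only, not the paper's network: the (aspect-ratio, normalized-area) feature values and labels below are hypothetical.

```python
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Classic perceptron on feature vectors; labels are +1 (human) / -1 (other)."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            # Update weights only on misclassified (or boundary) samples.
            if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

# Toy features: humans tall and small in area, vehicles wide and large.
X = [[2.5, 0.2], [2.8, 0.3], [0.6, 0.9], [0.5, 0.8]]
y = [1, 1, -1, -1]
w, b = train_perceptron(X, y)
```

The magnitude of each learned weight also gives a crude signal of how much the corresponding feature contributes, which is the spirit of the feature selection step.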


Human Activity Recognition Using Spatiotemporal 3-D Body Joint Features with Hidden Markov Models

  • Uddin, Md. Zia;Kim, Jaehyoun
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.10 no.6
    • /
    • pp.2767-2780
    • /
    • 2016
  • Video-based human-activity recognition has become increasingly popular due to its prominent applications in a variety of fields such as computer vision, image processing, smart-home healthcare, and human-computer interaction. The essential goal of a video-based activity-recognition system is to provide behavior-based information that proactively assists a person with his/her tasks. The target of this work is a novel approach to human-activity recognition that uses human-body-joint features extracted from depth videos. From the depth silhouette images, direction and magnitude features are first obtained from each connected body-joint pair, and then augmented with the motion direction and magnitude features of each joint in the next frame. A generalized discriminant analysis (GDA) is applied to make the spatiotemporal features more robust, after which the time-sequence features are fed into a Hidden Markov Model (HMM) to train each activity. Lastly, all of the trained activity HMMs are used for depth-video activity recognition.
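The per-pair and per-joint features in this abstract are each a direction plus a magnitude: for a connected joint pair within a frame, and for the same joint across consecutive frames. A minimal 2-D sketch with hypothetical joint coordinates (the paper works on 3-D joints from depth video):

```python
import math

def joint_pair_features(p, q):
    """Direction (radians) and magnitude of the vector from joint p to joint q."""
    dx, dy = q[0] - p[0], q[1] - p[1]
    return math.atan2(dy, dx), math.hypot(dx, dy)

def motion_features(joint_t, joint_t1):
    """Motion direction and magnitude of one joint between consecutive frames."""
    return joint_pair_features(joint_t, joint_t1)

# Toy coordinates: a shoulder->elbow pair, then the elbow in the next frame.
ang, mag = joint_pair_features((0, 0), (3, 4))
m_ang, m_mag = motion_features((3, 4), (3, 6))
```

Concatenating these spatial and temporal values per frame yields the time-sequence feature vectors that GDA and the per-activity HMMs then consume.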

Anti-Spoofing Method for Iris Recognition by Combining the Optical and Textural Features of Human Eye

  • Lee, Eui Chul;Son, Sung Hoon
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.6 no.9
    • /
    • pp.2424-2441
    • /
    • 2012
  • In this paper, we propose a fake-iris detection method that combines optical and textural features of the human eye. To extract the optical features, we used dual Purkinje images generated on the anterior cornea and the posterior lens surface, based on an analytic model of the human eye's optical structure. To extract the textural features, we measured the change in a given iris pattern (based on wavelet decomposition) with respect to the direction of illumination. This method improves on previous research in two ways. First, to obtain the optical and textural features simultaneously, we used five illuminators. Second, to improve fake-iris detection performance, we used an SVM (Support Vector Machine) to combine the optical and textural features. Combining the features solves the problems of previous single-feature-based works. Experimental results showed an EER (Equal Error Rate) of 0.133%.
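The EER quoted at the end is the operating point where the false accept rate (impostors accepted) equals the false reject rate (genuines rejected). A stdlib sketch that approximates it by scanning thresholds over toy, hypothetical score lists (not the paper's data):

```python
def approximate_eer(genuine, impostor):
    """Scan candidate thresholds and return (threshold, EER) where the
    false accept and false reject rates are closest. Higher score = more
    likely genuine."""
    best = None
    for t in sorted(set(genuine) | set(impostor)):
        far = sum(s >= t for s in impostor) / len(impostor)  # false accepts
        frr = sum(s < t for s in genuine) / len(genuine)     # false rejects
        if best is None or abs(far - frr) < abs(best[1] - best[2]):
            best = (t, far, frr)
    t, far, frr = best
    return t, (far + frr) / 2

genuine = [0.9, 0.8, 0.5, 0.7]    # toy classifier scores for live irises
impostor = [0.4, 0.65, 0.3, 0.2]  # toy scores for fake irises
thr, eer = approximate_eer(genuine, impostor)
```

With real data the scores would come from the combined SVM output, and interpolation between thresholds gives a smoother EER estimate.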

Traded control of telerobot system with an autonomous visual sensor feedback (자율적인 시각 센서 피드백 기능을 갖는 원격 로보트 시스템교환 제어)

  • 김주곤;차동혁;김승호
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 1996.10b
    • /
    • pp.940-943
    • /
    • 1996
  • In teleoperation, a human operator generally controls the slave arm while watching a monitor image obtained from a camera installed in the working environment. Because a monitor shows only a 2-D image, the operator lacks depth information and cannot work with high accuracy. In this paper, we propose a traded control method that uses a visual sensor to solve this problem; with the proposed algorithm, a teleoperation system can be controlled precisely. Not only the human operator's command but also an autonomous visual-sensor feedback command is given to the slave arm, in order to make the current image features coincide with the target image features. When the slave arm is far from the target position, the operator can clearly see the difference between the desired and current image features, but the computed visual-sensor command has large errors; when the slave arm is near the target position, the situation is reversed. With this visual-sensor feedback, the operator does not need to correct fine differences between the desired and current image features, and the proposed method achieves higher accuracy than methods without sensor feedback. The effectiveness of the proposed control method is verified through a series of experiments.
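The trade-off the abstract describes, operator authority when far from the target and sensor authority when near, can be sketched as a distance-weighted blend of the two commands. The linear weighting and the commands below are hypothetical illustrations, not the paper's control law:

```python
def traded_command(human_cmd, sensor_cmd, distance, d_max=1.0):
    """Blend operator and visual-sensor commands: far from the target the
    operator dominates (w -> 1); near the target the sensor feedback
    dominates (w -> 0)."""
    w = min(distance / d_max, 1.0)
    return [w * h + (1.0 - w) * s for h, s in zip(human_cmd, sensor_cmd)]

# Toy 2-DOF velocity commands: operator pushes in x, sensor corrects in y.
far_cmd = traded_command([1.0, 0.0], [0.0, 1.0], distance=1.0)
near_cmd = traded_command([1.0, 0.0], [0.0, 1.0], distance=0.25)
```

In the paper's setting, `sensor_cmd` would be computed from the error between current and target image features, and `distance` from the same image-feature error magnitude.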


Development of human-in-the-loop experiment system to extract evacuation behavioral features: A case of evacuees in nuclear emergencies

  • Younghee Park;Soohyung Park;Jeongsik Kim;Byoung-jik Kim;Namhun Kim
    • Nuclear Engineering and Technology
    • /
    • v.55 no.6
    • /
    • pp.2246-2255
    • /
    • 2023
  • Evacuation time estimation (ETE) is crucial for the effective planning and implementation of resident protection measures, owing to its applicability to nuclear emergencies. However, as the Fukushima case confirmed, the ETE performed by nuclear operators does not reflect behavioral features, thus exposing gaps that are likely to appear in real-world situations. Existing research methods, including surveys and interviews, have limitations in extracting highly feasible behavioral features. To overcome these limitations, we propose a VR-based immersive experiment system. The VR system realistically simulates nuclear emergencies by structuring existing disasters and human decision processes in response to them. Evacuation behavioral features were quantitatively extracted through the proposed experiment system, which was systematically verified by statistical analysis and a comparative study of experimental results based on previous research. In addition, as future work, an application method that can simulate multi-level evacuation dynamics is proposed. The proposed experiment system presents an innovative methodology for quantitatively extracting human behavioral features that have not been comprehensively studied in evacuation research. We expect that more realistic evacuation behavioral features can be collected through additional experiments and studies of various evacuation factors.

Feature Detection in Faces Based on a Model (모델 기반 얼굴에서 특징점 추출)

  • 석경휴;김용수;김동국;배철수;나상동
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2002.05a
    • /
    • pp.134-138
    • /
    • 2002
  • Unlike other general objects, human faces do not have distinct features. In general, the features of the eyes, nose, and mouth, which are recognized first when a human looks at a face, are defined; these features differ from face to face. In this paper, we propose a face recognition algorithm using the hidden Markov model (HMM). In the preprocessing stage, we find the edges of a face using a locally adaptive threshold scheme, extract features based on generic knowledge of a face, and construct a database with the extracted features. In the training stage, we generate HMM parameters for each person using the forward-backward algorithm. In the recognition stage, we apply the probability values calculated by the HMM to the input data; the input face is then recognized by the Euclidean distance of the face feature vector and the cross-correlation between the input image and the database images. Computer simulation shows that the proposed HMM algorithm gives a higher recognition rate than conventional face recognition algorithms.
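The two matching criteria named in the recognition stage, Euclidean distance between feature vectors and cross-correlation between images, are both one-liners over flattened vectors. A stdlib sketch with hypothetical probe/gallery feature vectors (not the paper's HMM features):

```python
import math

def euclidean(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def cross_correlation(a, b):
    """Zero-mean normalised cross-correlation of two equal-length vectors;
    1.0 means a perfect (linear) match."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0

# Toy gallery of face feature vectors keyed by person ID.
probe = [1.0, 2.0, 3.0]
gallery = {"A": [1.0, 2.0, 3.1], "B": [3.0, 1.0, 2.0]}
best = min(gallery, key=lambda k: euclidean(probe, gallery[k]))
```

A combined matcher, as the abstract suggests, would weigh the HMM probability together with both scores before declaring the identity.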
