Abstract
For human-robot interaction, a robot should recognize the meaning of human behavior. For static behaviors such as facial expressions and sign language, the information contained in a single image is sufficient to convey the meaning to the robot. For dynamic behaviors such as gestures, however, information from sequential images is required. This paper proposes behavior classification using a fuzzy classifier to convey the meaning of dynamic behavior to the robot. The proposed method extracts feature points from input images with a skeleton model, generates a vector space from the difference image of the extracted feature points, and uses this information as training data for the fuzzy classifier. Finally, we demonstrate the effectiveness and feasibility of the proposed method through experiments.
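The pipeline summarized above (skeleton feature points → difference vectors between sequential frames → fuzzy classification) can be sketched as follows. This is a minimal illustrative sketch only: the skeleton extraction step is omitted, and the Gaussian membership functions, class names, and all identifiers are assumptions for illustration, not the paper's actual classifier design.

```python
import numpy as np

def motion_vectors(frames):
    """Build a feature vector from the differences between sequential frames.

    frames: array-like of shape (T, K, 2) -- T frames, each with K skeleton
    feature points given as (x, y) coordinates (assumed already extracted).
    Returns a flattened vector of the (T-1) per-frame motion differences.
    """
    frames = np.asarray(frames, dtype=float)
    return np.diff(frames, axis=0).ravel()

class FuzzyClassifier:
    """Toy fuzzy classifier: one Gaussian membership function per behavior
    class, centered at the mean training feature vector of that class."""

    def __init__(self, sigma=1.0):
        self.sigma = sigma
        self.centers = {}

    def fit(self, features, labels):
        # Learn one class center from the labeled motion-vector data.
        for label in set(labels):
            vecs = [f for f, l in zip(features, labels) if l == label]
            self.centers[label] = np.mean(vecs, axis=0)

    def memberships(self, feature):
        # Membership degree of `feature` in each class (Gaussian in distance).
        return {
            label: float(np.exp(-np.sum((feature - c) ** 2)
                                / (2 * self.sigma ** 2)))
            for label, c in self.centers.items()
        }

    def predict(self, feature):
        # Classify as the behavior with the highest membership degree.
        m = self.memberships(feature)
        return max(m, key=m.get)
```

A usage example with two hypothetical gestures, each given as one feature point tracked over three frames:

```python
wave = [[[0, 0]], [[1, 0]], [[2, 0]]]    # point moves right
raise_arm = [[[0, 0]], [[0, 1]], [[0, 2]]]  # point moves up
clf = FuzzyClassifier(sigma=1.0)
clf.fit([motion_vectors(wave), motion_vectors(raise_arm)], ["wave", "raise"])
clf.predict(motion_vectors([[[0, 0]], [[0.9, 0.1]], [[1.8, 0.1]]]))  # "wave"
```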