• Title/Summary/Keyword: robot assist

Search Result 104, Processing Time 0.017 seconds

Multiple Human Recognition for Networked Camera based Interactive Control in IoT Space

  • Jin, Taeseok
    • Journal of the Korean Society of Industry Convergence
    • /
    • v.22 no.1
    • /
    • pp.39-45
    • /
    • 2019
  • We propose an active-color-model-based method for tracking the motions of multiple humans using a networked multiple-camera system in an IoT space as a human-robot coexistent system. An IoT space is a space in which many intelligent devices, such as computers and sensors (e.g., color CCD cameras), are distributed. Human beings can be a part of the IoT space as well. One of the main goals of an IoT space is to assist humans and to provide various services for them. To be capable of doing so, the IoT space must be able to perform various human-related tasks, one of which is to identify and track multiple objects seamlessly. In an environment where many camera modules are distributed over a network, it is important to identify an object in order to track it, because different cameras may be needed as the object moves through the space, and the IoT space should determine the appropriate one. This paper describes appearance-based unknown-object tracking with a distributed vision system in an IoT space. First, we discuss how object color information is obtained and how the color-appearance-based model is constructed from these data. Then, we discuss the global color model built from the local color information. The learning process within the global model and the experimental results are also presented.
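The abstract above does not specify the color model's internals, but the idea of re-identifying a person across distributed cameras by color appearance can be sketched with normalized color histograms and histogram intersection. All function names, the bin count, and the matching threshold below are illustrative assumptions, not the paper's method:

```python
import numpy as np

def color_histogram(pixels, bins=8):
    """Quantize RGB pixels (shape (N, 3), values 0-255) into a normalized joint color histogram."""
    idx = (pixels // (256 // bins)).astype(int)           # per-channel bin index
    flat = idx[:, 0] * bins * bins + idx[:, 1] * bins + idx[:, 2]
    hist = np.bincount(flat, minlength=bins ** 3).astype(float)
    return hist / hist.sum()

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1.0 means identical color appearance."""
    return float(np.minimum(h1, h2).sum())

def reidentify(global_model, detection_pixels, threshold=0.5):
    """Match a detection from any camera against global per-person color models."""
    h = color_histogram(detection_pixels)
    scores = {pid: histogram_intersection(h, model) for pid, model in global_model.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else None
```

Because the histogram discards spatial layout, the same model can match a person seen from different viewpoints by different cameras, which is the property that makes a shared global color model useful for camera hand-off.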

Gait Phase Estimation Method Adaptable to Changes in Gait Speed on Level Ground and Stairs (평지 및 계단 환경에서 보행 속도 변화에 대응 가능한 웨어러블 로봇의 보행 위상 추정 방법)

  • Hobin Kim;Jongbok Lee;Sunwoo Kim;Inho Kee;Sangdo Kim;Shinsuk Park;Kanggeon Kim;Jongwon Lee
    • The Journal of Korea Robotics Society
    • /
    • v.18 no.2
    • /
    • pp.182-188
    • /
    • 2023
  • Due to the acceleration of an aging society, the need for lower-limb exoskeletons to assist gait is increasing. For use in daily life, it is essential to estimate the gait phase accurately even as the wearer's walking environment and walking speed change frequently. In this paper, we implement an LSTM-based gait phase estimation model trained on gait data collected at varying gait speeds in outdoor level-ground and stair environments. In addition, models were trained with each of the two ground-truth criteria used to segment the gait phase in previous studies, max hip extension (MHE) and max hip flexion (MHF), and the resulting gait phase estimation errors were compared across walking environments. The average error rates over all walking environments using the MHF and MHE reference data were 2.97% and 4.36%, respectively; the MHF-based result was thus 1.39 percentage points lower than the MHE-based result.
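The abstract segments the gait cycle using MHE or MHF events as ground truth. A minimal sketch of that labeling step, assuming MHF events correspond to local maxima of a hip-flexion angle signal and that phase rises linearly from 0% to 100% between consecutive events (the paper's exact event detection and LSTM model are not specified here):

```python
import numpy as np

def detect_mhf_events(hip_angle):
    """Indices of local maxima of the hip flexion angle, taken as MHF events."""
    a = np.asarray(hip_angle, dtype=float)
    return [i for i in range(1, len(a) - 1) if a[i] > a[i - 1] and a[i] >= a[i + 1]]

def phase_labels(hip_angle):
    """Gait phase in [0, 100)% per sample; the phase resets to 0 at each MHF event.

    Samples before the first and after the last event have no defined phase (NaN).
    """
    events = detect_mhf_events(hip_angle)
    phase = np.full(len(hip_angle), np.nan)
    for start, end in zip(events[:-1], events[1:]):
        n = end - start
        phase[start:end] = 100.0 * np.arange(n) / n
    return phase
```

Labels produced this way would serve as regression targets for a sequence model such as an LSTM, which is what allows the estimator to generalize across gait speeds: the phase definition is event-relative rather than time-relative.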

Egocentric Vision for Human Activity Recognition Using Deep Learning

  • Malika Douache;Badra Nawal Benmoussat
    • Journal of Information Processing Systems
    • /
    • v.19 no.6
    • /
    • pp.730-744
    • /
    • 2023
  • The topic of this paper is the recognition of human activities using egocentric vision, in particular video captured by body-worn cameras, which could be helpful for video surveillance, automatic search, and video indexing. It could likewise help assist elderly and frail persons, improving their daily lives. Human activity recognition remains a challenging task because of the large variations in how actions are executed; here it is realized through an external device, similar to a robot, acting as a personal assistant. The inferred information is used both online, to assist the person, and offline, to support the personal assistant. Our proposed method is robust to the various sources of variability in action executions, and the major purpose of this paper is to present an efficient and simple recognition method that uses egocentric camera data only, based on convolutional neural networks and deep learning. In terms of accuracy, simulation results outperform the current state of the art by a significant margin: 61% when using egocentric camera data only, more than 44% when using egocentric camera and several stationary cameras, and more than 12% when using both inertial measurement unit (IMU) and egocentric camera data.
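The paper's CNN architecture is not given in the abstract, but the basic forward pass of a convolutional classifier over a single video frame can be sketched in plain NumPy. Everything below (kernel shapes, the conv → ReLU → global-average-pool → linear → softmax pipeline) is a generic illustration of the technique, not the authors' network:

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D cross-correlation on a single-channel image: the core CNN operation."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0.0)

def softmax(z):
    e = np.exp(z - z.max())                 # shift for numerical stability
    return e / e.sum()

def classify_frame(frame, kernels, weights):
    """Tiny CNN forward pass: conv -> ReLU -> global average pool -> linear -> softmax.

    Returns a probability distribution over activity classes (one per row of `weights`).
    """
    feats = np.array([relu(conv2d(frame, k)).mean() for k in kernels])
    return softmax(weights @ feats)
```

In a trained network the kernels and weights are learned from labeled frames; for egocentric activity recognition the per-frame class scores would then be aggregated over the video clip.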

Factor Analysis of Elementary School Student's Learning Satisfaction after the Robot utilized STEAM Education (로봇 활용 STEAM 교육에 참가한 초등학생들의 학습지속 요인분석)

  • Shin, Seung-Young
    • The Journal of Korean Association of Computer Education
    • /
    • v.15 no.5
    • /
    • pp.11-22
    • /
    • 2012
  • This study applied the Technology Acceptance Model (TAM) to analyze how flow factors such as 'harmony of challenge and technology' affect learners' intention to continue learning in STEAM classes employing robots. For the study, the 'Energy and Tools' chapter of the science textbook for the second semester of the 6th grade was rearranged and applied to 189 students; of these, only the 174 usable responses were used in the analysis. The analysis found that the students' learning-flow factor (harmony of challenge and technology) had a stronger effect on the ease-of-learning factor than on the usefulness-of-learning factor, and that this in turn influenced their intention to continue learning, ultimately through the value-of-learning factor. The findings indicate that, for robots to be used in STEAM classes in a way that sustains students' intention to continue learning, learners need proper and active attitudes toward learning and basic knowledge of robots, and that on this basis the value-related aspects of how robots can assist learning and affect learning outcomes in STEAM classes should be considered. On the other hand, the ease-of-learning factor and the harmony-of-challenge-and-technology factor did not have a direct positive (+) effect on the intention to continue learning and the value of learning, respectively. However, each of the two factors had an indirect influence on its dependent variable within the significant range, which is why the author includes this result of the analysis.
