• Title/Summary/Keyword: Human Motion Recognition (휴먼 동작 인식)

Search results: 16

Manipulator with Camera for Mobile Robots (모바일 로봇을 위한 카메라 탑재 매니퓰레이터)

  • Lee Jun-Woo;Choe, Kyoung-Geun;Cho, Hun-Hee;Jeong, Seong-Kyun;Bong, Jae-Hwan
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.17 no.3 / pp.507-514 / 2022
  • Mobile manipulators are gaining attention in the field of home automation due to their mobility and manipulation capabilities. In this paper, we developed a small manipulator system that can be mounted on a mobile robot, as a preliminary study toward developing a mobile manipulator. The developed manipulator has four degrees of freedom. At the end-effector, a camera and a gripper are mounted to recognize and manipulate objects. One of the four degrees of freedom is linear motion in the vertical direction, for better interaction with human hands, which are located higher than the mobile manipulator. The manipulator was designed to place the four actuators close to its base to reduce rotational inertia, which improves manipulation stability and reduces the risk of rollover. The developed manipulator repeatedly performed a pick-and-place task and successfully manipulated objects within its workspace.

Human Interaction Recognition with a Network of Dynamic Probabilistic Models (동적 확률 모델 네트워크 기반 휴먼 상호 행동 인식)

  • Suk, Heung-Il;Lee, Seong-Whan
    • Journal of KIISE: Software and Applications / v.36 no.11 / pp.955-959 / 2009
  • In this paper, we propose a novel method for analyzing human interactions based on the walking trajectories of human subjects. Our principal assumption is that an interaction episode is composed of meaningful smaller unit interactions, which we call 'sub-interactions.' Whole interactions are represented by an ordered concatenation, or network, of sub-interaction models. The experiments confirm the effectiveness and robustness of the proposed method through an analysis of the inner workings of an interaction network and a performance comparison with previous approaches.

Statistical Modeling Methods for Analyzing Human Gait Structure (휴먼 보행 동작 구조 분석을 위한 통계적 모델링 방법)

  • Sin, Bong Kee
    • Smart Media Journal / v.1 no.2 / pp.12-22 / 2012
  • Today we are witnessing an increasingly widespread use of cameras in our lives for video surveillance, robot vision, and mobile phones. This has led to a renewed interest in computer vision in general and an ongoing boom in human activity recognition in particular. Although not particularly fancy per se, human gait is inarguably the most common and frequent action. Early in the decade there was a passing interest in human gait recognition, but it declined before a systematic analysis and understanding of walking motion emerged. This paper presents a set of DBN-based models for the analysis of human gait, in a sequence of increasing complexity and modeling power. The discussion centers around HMM-based statistical methods capable of modeling the variability and incompleteness of input video signals. Finally, a novel idea of extending the discrete-state Markov chain with a continuous density function is proposed in order to better characterize the gait direction. The proposed modeling framework recognizes pedestrians with up to 91.67% accuracy and elegantly decodes the two independent gait components of direction and posture through a sequence of experiments.
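
The HMM-based decoding that such gait models build on can be illustrated with a tiny Viterbi sketch (a generic textbook example with made-up gait states and observations, not the paper's actual model):

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden-state sequence for an observation sequence."""
    # V[t][s] = (best probability of reaching state s at time t, best path)
    V = [{s: (start_p[s] * emit_p[s][obs[0]], [s]) for s in states}]
    for o in obs[1:]:
        V.append({})
        for s in states:
            prob, path = max(
                (V[-2][p][0] * trans_p[p][s] * emit_p[s][o], V[-2][p][1] + [s])
                for p in states)
            V[-1][s] = (prob, path)
    return max(V[-1].values())[1]

# Toy gait model: hidden posture phases, observed silhouette widths.
states = ("stance", "swing")
start_p = {"stance": 0.6, "swing": 0.4}
trans_p = {"stance": {"stance": 0.7, "swing": 0.3},
           "swing": {"stance": 0.4, "swing": 0.6}}
emit_p = {"stance": {"narrow": 0.8, "wide": 0.2},
          "swing": {"narrow": 0.3, "wide": 0.7}}
print(viterbi(["narrow", "wide", "wide"], states, start_p, trans_p, emit_p))
# ['stance', 'swing', 'swing']
```

The same dynamic-programming recursion underlies the more elaborate DBN variants the paper describes; only the state space and emission densities change.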


A Study on the Dataset Construction Needed to Realize a Digital Human in Fitness with Single Image Recognition (단일 이미지 인식으로 피트니스 분야 디지털 휴먼 구현에 필요한 데이터셋 구축에 관한 연구)

  • Soo-Hyuong Kang;Sung-Geon Park;Kwang-Young Park
    • Proceedings of the Korea Information Processing Society Conference / 2023.05a / pp.642-643 / 2023
  • This study proposes improving the performance of AI services in the fitness field not by developing new AI models but by improving dataset quality, and aims to evaluate the performance of that data quality. Ten domain experts participated in the data design, and Google's MediaPipe model was used for automatic classification of exercise movements from single-viewpoint video. Recognition accuracy for the push-up motion was 100%, but when the elbow angle was 15° or less the repetition was not counted, a result that disagreed with the opinions of fitness experts. Future research requires a system that can analyze not only the classification of recognized motions but also the amount of exercise.
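
Push-up counting of this kind hinges on an elbow angle computed from pose landmarks and a repetition threshold. A minimal sketch, assuming 2D landmark coordinates such as those returned by a pose estimator like MediaPipe (the function names and thresholds here are illustrative, not from the paper):

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by points a-b-c, each an (x, y) tuple."""
    ang1 = math.atan2(a[1] - b[1], a[0] - b[0])
    ang2 = math.atan2(c[1] - b[1], c[0] - b[0])
    deg = abs(math.degrees(ang1 - ang2))
    return 360 - deg if deg > 180 else deg

def count_pushups(elbow_angles, down_thresh=90.0, up_thresh=160.0):
    """Count repetitions from a sequence of elbow angles using hysteresis:
    a rep is one full descent below down_thresh followed by extension."""
    reps, down = 0, False
    for ang in elbow_angles:
        if ang < down_thresh:
            down = True
        elif down and ang > up_thresh:
            reps += 1
            down = False
    return reps

# shoulder, elbow, wrist on a straight line: a fully extended arm is 180 degrees
print(joint_angle((0, 0), (1, 0), (2, 0)))     # 180.0
print(count_pushups([170, 80, 170, 85, 175]))  # 2
```

A hysteresis band like this is also one way to avoid the boundary ambiguity the abstract reports at very small elbow angles, since the count depends on crossing two well-separated thresholds rather than one.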

Human Gesture Recognition Technology Based on User Experience for Multimedia Contents Control (멀티미디어 콘텐츠 제어를 위한 사용자 경험 기반 동작 인식 기술)

  • Kim, Yun-Sik;Park, Sang-Yun;Ok, Soo-Yol;Lee, Suk-Hwan;Lee, Eung-Joo
    • Journal of Korea Multimedia Society / v.15 no.10 / pp.1196-1204 / 2012
  • In this paper, a series of algorithms is proposed for controlling different kinds of multimedia content and realizing human-computer interaction using a single input device. Human gesture recognition based on NUI is presented first. Since the raw image captured by the camera is not suitable for further processing, it is transformed into the YCbCr color space, and morphological processing is applied to remove noise. Boundary energy and depth information are extracted for hand detection. After hand detection, the PCA algorithm is used to recognize hand posture, and difference images together with the moment method are used to detect the hand centroid and extract the trajectory of hand movement. Eight direction codes are defined to quantize the gesture trajectory and determine its symbol sequence, and an HMM is then applied to recognize hand gestures from that sequence. With this series of methods, multimedia content can be controlled through human gestures. In extensive experiments the proposed algorithms performed well: the hand detection rate reached 94.25%, the gesture recognition rate exceeded 92.6%, the hand posture recognition rate reached 85.86%, and the face detection rate reached 89.58%. With these results, many kinds of multimedia content, such as video players, MP3 players, and e-books, can be controlled effectively.
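
The eight-direction trajectory quantization above can be sketched as follows (a minimal illustration, assuming code 0 points right and codes increase counter-clockwise in 45° steps; the paper's exact code assignment may differ):

```python
import math

def direction_code(p0, p1):
    """Quantize the movement vector p0 -> p1 into one of 8 direction codes (0..7)."""
    angle = math.degrees(math.atan2(p1[1] - p0[1], p1[0] - p0[0])) % 360
    return int((angle + 22.5) // 45) % 8   # 45-degree sectors centered on each code

def trajectory_symbols(points):
    """Convert a hand trajectory into the symbol sequence fed to the HMM."""
    return [direction_code(points[i], points[i + 1]) for i in range(len(points) - 1)]

# Right, up, left, down movements map to codes 0, 2, 4, 6.
print(trajectory_symbols([(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]))  # [0, 2, 4, 6]
```

Quantizing to a small symbol alphabet like this is what makes a discrete-observation HMM applicable to continuous hand trajectories.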

Interactive Communication Web Service in Medical Institutions for the Hearing Impaired (청각 장애인을 위한 의료 기관에서의 쌍방향 소통 웹페이지 개발)

  • Kim Doha;Kim Dohee;Song Yeojin
    • Proceedings of the Korea Information Processing Society Conference / 2023.11a / pp.1047-1048 / 2023
  • Hearing-impaired people communicate through sign language. To address the communication difficulties they face in medical situations, this paper builds a sign-language dataset centered on medical situations and then uses an R(2+1)D deep learning model to recognize and classify sign-language motions at the video level. The system was made available as a website built with Django. The web page is expected to have a positive effect not only on hearing-impaired individuals but on the medical community as a whole.

Real-Time Human Tracker Based on Location and Motion Recognition of User for Smart Home (스마트 홈을 위한 사용자 위치와 모션 인식 기반의 실시간 휴먼 트랙커)

  • Choi, Jong-Hwa;Park, Se-Young;Shin, Dong-Kyoo;Shin, Dong-Il
    • The KIPS Transactions: Part A / v.16A no.3 / pp.209-216 / 2009
  • The ubiquitous smart home is the home of the future that takes advantage of context information from the human and the home environment and provides automatic home services for the human. Human location and motion are the most important contexts in the ubiquitous smart home. We present a real-time human tracker that predicts human location and motion for the ubiquitous smart home, using four network cameras for real-time tracking. This paper explains the real-time human tracker's architecture and presents an algorithm detailing its two functions, prediction of human location and prediction of human motion. Location prediction uses three kinds of background images (IMAGE1: empty room; IMAGE2: room with furniture and home appliances; IMAGE3: IMAGE2 with the human present). By analyzing the three images, the tracker determines which furniture or home appliance the human is near, and it predicts human motion using a support vector machine. Locating the human from the three images took an average of 0.037 seconds. The SVM feature for motion recognition is the per-row pixel count of the moving object. Each motion was evaluated 1000 times, and the average accuracy over all motions was 86.5%.
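
The per-row pixel-count feature fed to the SVM can be sketched as follows (a minimal NumPy illustration; the normalization and the toy silhouette are assumptions for the example, not details from the paper):

```python
import numpy as np

def motion_feature(mask):
    """Per-row foreground pixel counts of a binary silhouette mask (H x W),
    normalized so the feature is invariant to silhouette scale."""
    counts = mask.astype(np.int32).sum(axis=1)   # foreground pixels per image row
    return counts / max(counts.max(), 1)         # avoid division by zero on empty masks

# A toy 4x4 silhouette that is wider at the bottom (e.g. a seated posture).
mask = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [1, 1, 1, 1],
                 [1, 1, 1, 1]])
print(motion_feature(mask))
```

Each posture (standing, sitting, lying) produces a characteristically shaped row profile, which is what makes this simple count vector separable by an SVM.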

The digital transformation of mask dance movement in intangible cultural asset based on human pose recognition (휴먼포즈 인식을 적용한 무형문화재 탈춤 동작 디지털전환)

  • SooHyuong Kang;SungGeon Park;KwangYoung Park
    • Proceedings of the Korea Information Processing Society Conference / 2023.11a / pp.678-680 / 2023
  • This study aims to digitize the mask dance (talchum) movements inscribed on UNESCO's Representative List of the Intangible Cultural Heritage of Humanity in 2022 and to pass the information on to future generations. Data were collected from 39 intangible-cultural-asset holders and transmitters belonging to 13 mask dance organizations designated as national intangible cultural assets and 5 organizations designated as municipal or provincial intangible cultural assets; the performers wore inertial motion-capture equipment and were recorded with 8 cameras. Bounding boxes were annotated during data processing, YOLO v8 was used for mask dance pose estimation, and YOLO v8 combined with a CNN model classified 130 mask dances. The results achieved an mAP-50 of 0.953, an mAP50-95 of 0.596, and an accuracy of 70%. As the training dataset grows and data quality improves, mask dance classification performance is expected to improve further.

A Mouse Control Method Using Hand Movement Recognition (손동작 인식을 이용한 마우스제어기법)

  • Kim, Jung-In
    • Journal of Korea Multimedia Society / v.15 no.11 / pp.1377-1383 / 2012
  • This paper proposes a human mouse system that replaces mouse input with human hand movement. As monitor resolutions increase, it is hardly possible, due to the resolution difference between web cameras and monitors, to place the cursor across the entire range of a monitor simply by moving a pointer that tracks the hand position in the webcam image. We therefore propose an effective method of placing the mouse cursor in the corner of the monitor where the user wants it, without repeated returning hand movements. We also propose a method of recognizing finger gestures using the thumb and index finger. Our measurements show a successful recognition rate of 97%, corroborating the effectiveness of the method.
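
Reaching the monitor corners from a small webcam tracking range is essentially a coordinate-mapping problem. A minimal sketch of one common approach (an "active region" inside the webcam frame mapped linearly onto the full monitor, with clamping at the edges; this is an illustrative scheme, not the paper's exact method):

```python
def map_to_screen(hand, cam_size, screen_size, margin=0.2):
    """Map a hand position in webcam coordinates to monitor coordinates.
    Only the central active region of the frame (excluding `margin` of the
    width/height on each side) is used, so the cursor can reach the screen
    corners without the hand leaving the camera's field of view."""
    out = []
    for h, c, s in zip(hand, cam_size, screen_size):
        lo, hi = margin * c, (1 - margin) * c   # active-region bounds
        t = (h - lo) / (hi - lo)                # normalize to [0, 1]
        t = min(max(t, 0.0), 1.0)               # clamp at the edges
        out.append(round(t * (s - 1)))
    return tuple(out)

# A hand at the edge of the active region reaches the screen corner.
print(map_to_screen((128, 96), (640, 480), (1920, 1080)))   # (0, 0)
print(map_to_screen((320, 240), (640, 480), (1920, 1080)))  # (960, 540), screen center
```

Shrinking the active region trades pointing precision for reach, which is the same tension between webcam and monitor resolution that the abstract describes.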

Real-Time Human Tracker Based Location and Motion Recognition for the Ubiquitous Smart Home (유비쿼터스 스마트 홈을 위한 위치와 모션인식 기반의 실시간 휴먼 트랙커)

  • Park, Se-Young;Shin, Dong-Kyoo;Shin, Dong-Il;Cuong, Nguyen Quoe
    • Proceedings of the Korean Information Science Society Conference / 2008.06d / pp.444-448 / 2008
  • The ubiquitous smart home is the home of the future that takes advantage of context information from the human and the home environment and provides automatic home services for the human. Human location and motion are the most important contexts in the ubiquitous smart home. We present a real-time human tracker that predicts human location and motion for the ubiquitous smart home, using four network cameras for real-time tracking. This paper explains the real-time human tracker's architecture and presents an algorithm detailing its two functions, prediction of human location and prediction of human motion. Location prediction uses three kinds of background images (IMAGE1: empty room; IMAGE2: room with furniture and home appliances; IMAGE3: IMAGE2 with the human present). By analyzing the three images, the tracker determines which furniture or home appliance the human is near, and it predicts human motion using a support vector machine. Locating the human from the three images took an average of 0.037 seconds. The SVM feature for motion recognition is the per-row pixel count of the moving object. Each motion was evaluated 1000 times, and the average accuracy over all motions was 86.5%.
