• Title/Summary/Keyword: 휴먼모션 (Human Motion)


Radar Image Extraction Scheme for FMCW Radar-Based Human Motion Indication (FMCW 레이다 기반 휴먼 모션 인지용 레이다 영상 추출 기법)

  • Hyun, Eugin;Jin, Young-Seok;Jeon, Hyeong-Cheol
    • The Journal of Korean Institute of Electromagnetic Engineering and Science / v.29 no.6 / pp.411-414 / 2018
  • In this paper, we propose a radar image extraction scheme for frequency-modulated continuous-wave (FMCW) radar-based human motion indication. We extract three-dimensional (3D) range-velocity-angle spectra and generate three micro-profile images by compressing the 3D images in all three directions in every frame. Furthermore, we use body-echo suppression to make use of weak reflections such as those from the hands and arms. By applying the resulting images to classifiers, various human motions can be indicated.
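The per-frame compression described in this abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation; the max-projection choice, the array shape, and all variable names are assumptions:

```python
import numpy as np

# Sketch: compress one frame's 3D range-velocity-angle spectrum into three
# 2D "micro-profile" images by collapsing one axis at a time. Max-projection
# is an assumption; the paper only states that the 3D images are compressed
# in all three directions in every frame.
rng = np.random.default_rng(0)
spectrum = rng.random((64, 32, 16))      # range x velocity x angle bins (assumed sizes)

range_velocity = spectrum.max(axis=2)    # collapse the angle axis
range_angle = spectrum.max(axis=1)       # collapse the velocity axis
velocity_angle = spectrum.max(axis=0)    # collapse the range axis
```

Stacking these three images over successive frames yields the micro-profile sequences that a classifier could then consume.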

Case Study : Cinematography using Digital Human in Tiny Virtual Production (초소형 버추얼 프로덕션 환경에서 디지털 휴먼을 이용한 촬영 사례)

  • Jaeho Im;Minjung Jang;Sang Wook Chun;Subin Lee;Minsoo Park;Yujin Kim
    • Journal of the Korea Computer Graphics Society / v.29 no.3 / pp.21-31 / 2023
  • In this paper, we introduce a case study of cinematography using a digital human in virtual production. The case study covers a system overview of LED-based virtual production and an efficient filming pipeline using a digital human. Unlike typical LED virtual production, which mainly projects the background onto the LED walls, in this case we use a digital human as a virtual actor to film scenes in which it communicates with a real actor. In addition, to film the dialogue scene between the real actor and the digital human in a real-time engine, we generated the digital human's speech animation automatically in advance by applying our Korean lip-sync technology based on audio and text. We verified this filming approach by producing short drama content with a real actor and a digital human in an LED-based virtual production environment using a real-time engine.

Vision-based human motion analysis for event recognition (휴먼 모션 분석을 통한 이벤트 검출 및 인식)

  • Cui, Yao-Huan;Lee, Chang-Woo
    • Proceedings of the Korean Society of Computer Information Conference / 2009.01a / pp.219-222 / 2009
  • Event detection and recognition has recently been an active and challenging research topic in computer vision, and event detection techniques are a useful and efficient application area for many surveillance systems. In this paper, we propose a method for detecting and recognizing events that can occur in an office environment. The events in the proposed method consist of entering, exiting, sitting-down, and standing-up. The proposed method detects events through human motion analysis using Motion History Image (MHI) sequences without any hardware sensors, and is invariant to a person's body shape, the type and color of their clothes, and their position relative to the camera. Edge detection is combined with the MHI sequence information to extract geometric features of human motion, which are then used as the basic features for event recognition. Because the proposed method uses a simple event detection framework, it can be extended simply by adding descriptions of the events to be detected. The proposed method is also applicable to many surveillance systems based on computer vision techniques.
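The MHI update at the heart of this approach can be sketched as follows. This is a minimal illustration of the standard motion-history-image recurrence, not the paper's code; the duration `tau`, the array sizes, and the toy motion mask are assumptions:

```python
import numpy as np

def update_mhi(mhi, motion_mask, tau=10):
    """Motion History Image update: pixels that moved in this frame are set
    to the timestamp tau; all other pixels decay toward zero by one step."""
    return np.where(motion_mask, tau, np.maximum(mhi - 1, 0))

# Toy sequence: a small blob moving to the right over three frames.
mhi = np.zeros((4, 6), dtype=int)
for t in range(3):
    mask = np.zeros((4, 6), dtype=bool)
    mask[1:3, t:t + 2] = True            # blob occupies two columns, shifting right
    mhi = update_mhi(mhi, mask, tau=10)
```

The resulting gradient of values (recent motion bright, older motion dimmer) is what the geometric features are extracted from.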


Deep Learning-Based Human Motion Denoising (딥 러닝 기반 휴먼 모션 디노이징)

  • Kim, Seong Uk;Im, Hyeonseung;Kim, Jongmin
    • Journal of IKEEE / v.23 no.4 / pp.1295-1301 / 2019
  • In this paper, we propose a novel method for denoising human motion using a bidirectional recurrent neural network (BRNN) with an attention mechanism. Corrupted motion captured from a single 3D depth-sensor camera is automatically projected onto a well-established smooth motion manifold. Incorporating an attention mechanism into the BRNN achieves better optimization results and higher accuracy than other deep learning frameworks, because a higher weight is selectively given to the more important input poses at specific frames when encoding the input motion. Experimental results show that our approach effectively handles various types of motion and noise, and we believe that our method can readily be used in motion capture applications as a post-processing step after capturing human motion.
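The attention weighting the abstract describes, giving more important poses a higher weight when encoding the motion, can be sketched in isolation. This is a simplified numpy illustration, not the paper's BRNN; the score values and feature sizes are invented:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())              # shift for numerical stability
    return e / e.sum()

def attention_pool(frames, scores):
    """Weight per-frame pose features by attention scores and sum them,
    so the encoding is dominated by the most relevant frames."""
    w = softmax(scores)                  # one normalized weight per frame
    return (w[:, None] * frames).sum(axis=0)

frames = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # 3 frames x 2 features
scores = np.array([0.1, 0.1, 5.0])       # the third frame dominates
pooled = attention_pool(frames, scores)
```

In the full model such scores would be learned jointly with the BRNN; here they are fixed only to show the weighting effect.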

Motion Monitoring using Mask R-CNN for Articulation Disease Management (관절질환 관리를 위한 Mask R-CNN을 이용한 모션 모니터링)

  • Park, Sung-Soo;Baek, Ji-Won;Jo, Sun-Moon;Chung, Kyungyong
    • Journal of the Korea Convergence Society / v.10 no.3 / pp.1-6 / 2019
  • In modern society, lifestyle and individuality are important, and personalized lifestyles and patterns are emerging. The number of people with articulation diseases is increasing due to poor living habits. In addition, as the number of single-person households increases, there are cases where emergency care is not received at the appropriate time. For health and disease management, we need information that individuals can manage themselves through accurate analysis of their condition, as well as care appropriate to emergency situations. In deep learning, CNNs are used effectively for data classification and prediction, but a CNN's accuracy and processing time vary with the features of the data. Therefore, processing speed and accuracy must be improved for real-time healthcare. In this paper, we propose motion monitoring using Mask R-CNN for articulation disease management. The proposed method uses Mask R-CNN, which is superior to a plain CNN in accuracy and processing time. After the user's motion is learned by the neural network, if the user's motion differs from the learned data, corrective guidance can be fed back to the user, the emergency situation can be reported to a guardian, and appropriate measures can be taken according to the situation.
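The feedback step, comparing the user's current motion against the learned data and alerting when it deviates, can be sketched as a simple distance check. This is a hypothetical illustration, not the paper's Mask R-CNN pipeline; the keypoints and the threshold value are invented:

```python
import numpy as np

def motion_deviation(current, reference):
    """Mean Euclidean distance between corresponding joint keypoints
    of the current pose and the learned reference pose."""
    return float(np.linalg.norm(current - reference, axis=1).mean())

reference = np.array([[0.0, 0.0], [1.0, 1.0]])   # learned keypoints (toy)
normal = motion_deviation(reference + 0.01, reference)    # slight jitter
abnormal = motion_deviation(reference + 0.5, reference)   # large deviation

THRESHOLD = 0.1  # hypothetical alert threshold, in the same units as the keypoints
```

In a real system, exceeding the threshold would trigger the user feedback or guardian notification described above.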

H-Anim-based Definition of Character Animation Data (캐릭터 애니메이션 데이터의 H-Anim 기반 정의)

  • Lee, Jae-Wook;Lee, Myeong-Won
    • Journal of KIISE: Computing Practices and Letters / v.15 no.10 / pp.796-800 / 2009
  • Currently, many software tools can generate 3D human figure models and animations thanks to advances in computer graphics technology. However, we still have interoperability problems with human data models across different applications because no common data model exists. To address this issue, the Web3D Consortium and ISO/IEC JTC1 SC24 WG6 developed the H-Anim standard. However, while H-Anim defines the structure of a human figure, it does not include human motion data formats. This research is intended to achieve interoperable human animation by defining the data for human motions on H-Anim figures. In this paper, we describe a syntactic method for defining motion data for an H-Anim figure and its implementation. In addition, we describe a method of specifying the motion parameters necessary for generating animations using an arbitrary character model data set created by a general graphics tool.

Design of Avatar Rehabilitation Content Service with Limited Range of Motion (관절 가동 범위의 제한 정보를 반영한 아바타 기반 재활 운동 콘텐츠 서비스 설계)

  • Yoon, Chang-Rak;Chang, Yoon-Seop;Kim, Jae-Chul
    • Proceedings of the Korea Information Processing Society Conference / 2021.11a / pp.990-992 / 2021
  • Patients with musculoskeletal disorders tend to have a limited range of motion (ROM) in the affected joints compared with healthy individuals. A rehabilitation exercise content service that does not account for these ROM limitations may actually worsen a patient's condition, so this is a service factor that requires caution. In this paper, we propose an avatar-based rehabilitation exercise content service technology that considers the limited joint range of motion of patients with musculoskeletal disorders. We design a technique for converting motion-capture data of rehabilitation exercises into avatar-based rehabilitation exercise content, and a technique for replaying that content with each patient's ROM limitation information applied. By examining and designing this set of technical components, we enable a personalized rehabilitation exercise content service that reflects each patient's different range of motion and supports safe and effective rehabilitation.
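The core safety constraint described here, replaying avatar motion only within a patient's limited ROM, reduces in the simplest case to clamping each retargeted joint angle. A minimal sketch with hypothetical angle values:

```python
def clamp_to_rom(angle, rom_min, rom_max):
    """Clamp a retargeted joint angle (degrees) to the patient's
    limited range of motion for that joint."""
    return max(rom_min, min(rom_max, angle))

# Example: the reference motion flexes a shoulder to 170 degrees, but this
# patient's recorded ROM limit for that joint is 0-120 degrees.
clamped = clamp_to_rom(170.0, 0.0, 120.0)
```

Applying this per joint and per frame keeps the personalized avatar content within each patient's safe range.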

3D Volumetric Capture-based Dynamic Face Production for Hyper-Realistic Metahuman (극사실적 메타휴먼을 위한 3D 볼류메트릭 캡쳐 기반의 동적 페이스 제작)

  • Oh, Moon-Seok;Han, Gyu-Hoon;Seo, Young-Ho
    • Journal of Broadcast Engineering / v.27 no.5 / pp.751-761 / 2022
  • With the development of digital graphics technology, the metaverse has become a significant trend in the content market, and demand for technology that generates high-quality 3D (three-dimensional) models is rapidly increasing. Accordingly, various technical attempts are being made to create high-quality 3D virtual humans, represented by digital humans. 3D volumetric capture is in the spotlight as a technology that can create a 3D human model faster and more precisely than existing 3D model creation methods. In this study, we analyze high-precision 3D face production technology through practical cases, covering the difficulties of content production and the technologies applied in volumetric 3D and 4D model creation. Based on an actual model implementation case using 3D volumetric capture, we examine techniques for producing 3D virtual human faces and produced a new metahuman using a graphics pipeline for efficient human face generation.

The Design of Digital Human Content Creation System (디지털 휴먼 컨텐츠 생성 시스템의 설계)

  • Lee, Sang-Yoon;Lee, Dae-Sik;You, Young-Mo;Lee, Kye-Hun;You, Hyeon-Soo
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.15 no.4 / pp.271-282 / 2022
  • In this paper, we propose a digital human content creation system. The system performs AI-based 3D modeling from a whole-body scan, followed by 3D modeling post-processing, texturing, and rigging. By combining this with virtual reality (VR) content information, the virtual model can move naturally in virtual reality, and digital human content can be created efficiently within one system. This enables the creation of virtual-reality-based digital human content with minimal resources. In addition, the system provides an automated pre-processing pipeline that removes the need for manual 3D modeling and texturing, along with technology for efficiently managing various digital human contents. In particular, since pre-processing steps such as the 3D modeling and texturing needed to construct a virtual model are performed automatically by artificial intelligence, the virtual model can be configured quickly and efficiently. It also has the advantage that digital human contents can be easily organized and managed through signature motions.

Recognition of Events by Human Motion for Context-aware Computing (상황인식 컴퓨팅을 위한 사람 움직임 이벤트 인식)

  • Cui, Yao-Huan;Shin, Seong-Yoon;Lee, Chang-Woo
    • Journal of the Korea Society of Computer and Information / v.14 no.4 / pp.47-57 / 2009
  • Event detection and recognition is a recently active and challenging topic in computer vision. This paper describes a new method for recognizing events caused by human motion in video sequences from an office environment. The proposed approach analyzes human motion using Motion History Image (MHI) sequences, and is invariant to body shape, the type and color of clothes, and the positions of target objects. The proposed method has two advantages: it is less sensitive to illumination changes than methods that use the color information of objects of interest, and it is scale invariant compared with methods that rely on prior knowledge such as the appearance or shape of objects of interest. Combined with edge detection, geometric characteristics of the human shape in the MHI sequences are used as the features. A further advantage is that the event detection framework is easy to extend by inserting descriptions of new events. The proposed method is a core technology for event detection systems based on context-aware computing as well as for surveillance systems based on computer vision techniques.