• Title/Summary/Keyword: Motion Modeling (모션 모델링)

Search results: 70

Vision-based Low-cost Walking Spatial Recognition Algorithm for the Safety of Blind People (시각장애인 안전을 위한 영상 기반 저비용 보행 공간 인지 알고리즘)

  • Sunghyun Kang;Sehun Lee;Junho Ahn
    • Journal of Internet Computing and Services / v.24 no.6 / pp.81-89 / 2023
  • In modern society, blind people face difficulties in navigating common environments such as sidewalks, elevators, and crosswalks. Research has been conducted to alleviate these inconveniences for the visually impaired through the use of visual and audio aids. However, such research often encounters limitations when it comes to practical implementation due to the high cost of wearable devices, high-performance CCTV systems, and voice sensors. In this paper, we propose an artificial intelligence fusion algorithm that utilizes low-cost video sensors integrated into smartphones to help blind people safely navigate their surroundings while walking. The proposed algorithm combines motion capture and object detection algorithms to detect moving people and various obstacles encountered during walking. We employed the MediaPipe library for motion capture to model and detect surrounding pedestrians in motion. Additionally, we used object detection algorithms to model and detect various obstacles that can occur while walking on sidewalks. Through experimentation, we validated the performance of the artificial intelligence fusion algorithm, achieving an accuracy of 0.92, a precision of 0.91, a recall of 0.99, and an F1 score of 0.95. This research can assist blind people in navigating around obstacles such as bollards, shared scooters, and vehicles encountered during walking, thereby enhancing their mobility and safety.
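As a rough illustration of how such reported scores relate to a fused detector, the sketch below combines hypothetical per-frame outputs of a pose-based pedestrian detector and an object detector with a simple OR rule, then computes the four metrics the authors report. The fusion rule and the data are illustrative assumptions, not the paper's algorithm.

```python
# Hypothetical per-frame labels: 1 = hazard present / detected, 0 = clear.
truth       = [1, 1, 1, 0, 1, 0, 1, 1, 0, 1]
pose_hits   = [1, 0, 1, 0, 1, 0, 1, 1, 0, 1]  # pose-based pedestrian detector
object_hits = [0, 1, 1, 1, 0, 0, 1, 0, 0, 1]  # obstacle (object) detector

# Simple OR fusion: warn when either detector fires.
fused = [max(p, o) for p, o in zip(pose_hits, object_hits)]

tp = sum(t == 1 and f == 1 for t, f in zip(truth, fused))
fp = sum(t == 0 and f == 1 for t, f in zip(truth, fused))
fn = sum(t == 1 and f == 0 for t, f in zip(truth, fused))
tn = sum(t == 0 and f == 0 for t, f in zip(truth, fused))

accuracy  = (tp + tn) / len(truth)
precision = tp / (tp + fp)
recall    = tp / (tp + fn)
f1        = 2 * precision * recall / (precision + recall)
```

Note how an OR rule trades precision for recall, which matches the pattern in the reported numbers (recall 0.99 well above precision 0.91).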

Documentation of Intangible Cultural Heritage Using Motion Capture Technology - Focusing on the Documentation of Seungmu, Salpuri and Taepyeongmu - (부록 3. 모션캡쳐를 이용한 무형문화재의 기록작성 - 국가지정 중요무형문화재 승무·살풀이·태평무를 중심으로 -)

  • Park, Weonmo;Go, Jungil;Kim, Yongsuk
    • Korean Journal of Heritage: History & Science / v.39 / pp.351-378 / 2006
  • With the development of media, the methods for documenting intangible cultural heritage have also developed and diversified. In addition to the earlier analogue methods of documentation, new multimedia technologies centering on digital photographs, sound recordings, and video have recently been applied. Among these new technologies, documentation using 'motion capture' has proved particularly valuable in fields that require three-dimensional records, such as dance and performance. Motion capture refers to a documentation technology that records signals of the time-varying positions derived from sensors attached to the surface of an object. It converts the sensor signals into digital data that can be plotted as points on the virtual coordinates of a computer and records the movement of those points over time as the object moves. By displaying digital data that represent the motion of a holder of an intangible cultural heritage, it produces scientific data for preservation. The National Research Institute of Cultural Properties (NRICP) has been developing a new documentation method for the Important Intangible Cultural Heritage designated by the Korean government, using motion capture equipment of the kind widely used for computer graphics in the film and game industries. The project, supported by lottery funds, applies motion capture technology over three years (2005 to 2007) to 11 performances from 7 traditional dances whose body movements hold considerable value among the Important Intangible Cultural Heritage performances.
In 2005, the first year of the project, data were accumulated for solo dances that are relatively simple in performing technique: Seungmu (monk's dance), Salpuri (a solo dance for spiritual cleansing), and Taepyeongmu (dance of peace). In 2006, group dances such as Jinju Geommu (Jinju sword dance), Seungjeonmu (dance for victory), and Cheoyongmu (dance of Lord Cheoyong) will be documented. In the final year, 2007, the documentation of Hakyeonhwadae Habseolmu (crane dance combined with the lotus blossom dance) will be completed, and education programmes for the comparative study, analysis, and transmission of intangible cultural heritage, together with three-dimensional contents for public service, will be devised from the accumulated data. By describing the processes and results of the motion capture documentation of Salpuri (Lee Mae-bang), Taepyeongmu (Kang Seon-young), and Seungmu (Lee Mae-bang, Lee Ae-ju and Jung Jae-man) conducted in 2005, this report introduces a new approach to the documentation of intangible cultural heritage. During the first year of the project, two questions were raised. First, how can the motions of a holder (dancer) be captured without interruption during a long performance? After repeated tests, the motion capture system proved stable and produced continuous results. Second, how can accurate motion be reproduced without a re-targeting process? For the first time in Korea, the project obtained digital data of the dancers' body shapes by body scanning before the motion capture sessions; the accurate three-dimensional body models of the four holders enhanced the accuracy of the captured dances.

Statistical Model for Emotional Video Shot Characterization (비디오 셧의 감정 관련 특징에 대한 통계적 모델링)

  • 박현재;강행봉
    • The Journal of Korean Institute of Communications and Information Sciences / v.28 no.12C / pp.1200-1208 / 2003
  • Affective computing plays an important role in intelligent Human-Computer Interaction (HCI). To detect emotional events, it is desirable to construct a computing model that extracts emotion-related features from video. In this paper, we propose a statistical model based on the probabilistic distribution of low-level features in video shots. The proposed method extracts low-level features from video shots and then forms a GMM (Gaussian Mixture Model) from them to detect emotional shots. As low-level features, we use color, camera motion, and the sequence of shot lengths. The features are modeled as a GMM using the EM (Expectation Maximization) algorithm, and the relations between time and emotions are estimated by MLE (Maximum Likelihood Estimation). Finally, the two statistical models are combined in a Bayesian framework to detect emotional events in video.
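The GMM-with-EM fitting step described above can be sketched for the one-dimensional case as follows; the synthetic data, the min/max initialization, and the two-component restriction are illustrative choices, not the paper's configuration.

```python
import math
import random

def gmm_em_1d(xs, iters=50):
    """Fit a two-component 1-D Gaussian mixture with plain EM."""
    mu = [min(xs), max(xs)]          # spread initialization
    var = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each sample
        resp = []
        for x in xs:
            p = [w[j] / math.sqrt(2 * math.pi * var[j])
                 * math.exp(-(x - mu[j]) ** 2 / (2 * var[j])) for j in range(2)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: re-estimate weights, means, and variances
        for j in range(2):
            nj = sum(r[j] for r in resp)
            w[j] = nj / len(xs)
            mu[j] = sum(r[j] * x for r, x in zip(resp, xs)) / nj
            var[j] = sum(r[j] * (x - mu[j]) ** 2 for r, x in zip(resp, xs)) / nj + 1e-6
    return w, mu, var

# Synthetic "low-level feature" samples drawn from two Gaussians
rng = random.Random(1)
xs = [rng.gauss(0.0, 1.0) for _ in range(200)] + [rng.gauss(5.0, 1.0) for _ in range(200)]
w, mu, var = gmm_em_1d(xs)
```

With well-separated clusters, the recovered means land near 0 and 5 and the weights near 0.5 each.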

Virtual Brake Pressure Sensor Using Vehicle Yaw Rate Feedback (차량 요레이트 피드백을 통한 가상 제동 압력 센서 개발)

  • You, Seung-Han
    • Transactions of the Korean Society of Mechanical Engineers A / v.40 no.1 / pp.113-120 / 2016
  • This paper presents observer-based virtual sensors for YMC(Yaw Moment Control) systems by differential braking. A high-fidelity empirical model of the hydraulic unit in YMC system was developed for a model-based observer design. Optimal, adaptive, and robust observers were then developed and their estimation accuracy and robustness against model uncertainty were investigated via HILS tests. The HILS results indicate that the proposed disturbance attenuation approach indeed exhibits more satisfactory pressure estimation performance than the other approach with admissible degradation against the predefined model disturbance.
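As a minimal sketch of the observer idea, assuming a first-order pressure model with the output taken as the state itself for brevity (the paper's identified hydraulic-unit model and yaw-rate feedback path are far richer), a Luenberger-style observer looks like this; a, b, and the gain L are illustrative values.

```python
# First-order pressure model p_dot = -a*p + b*u with a Luenberger observer.
a, b, L = 2.0, 1.0, 5.0
dt = 0.001

p, p_hat = 1.0, 0.0          # true pressure vs. observer estimate
for _ in range(5000):        # 5 s of simulated time
    u = 1.0                  # constant brake command
    y = p                    # measured output (taken as the state for brevity)
    p = p + dt * (-a * p + b * u)
    # model prediction corrected by the output-error feedback term
    p_hat = p_hat + dt * (-a * p_hat + b * u + L * (y - p_hat))
```

The estimation error decays at rate a + L, so a larger gain tracks faster but amplifies measurement noise, which is the trade-off the optimal/adaptive/robust designs in the paper address.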

Development of Web-based User Script Linking System for Three-dimensional Robot Simulation (3차원 로봇 시뮬레이션 환경을 위한 웹 기반의 사용자 스크립트 연동 시스템 개발)

  • Yang, Jeong-Yean
    • The Journal of the Korea Contents Association / v.19 no.2 / pp.469-476 / 2019
  • Robotic motion is designed through the rotation and translation of multiple joint coordinates in three-dimensional space, and joint coordinates are generally modeled by homogeneous transformation matrices. However, the complexity of three-dimensional motion calls for visualization in simulation environments where models and generated motions can be verified. Many simulation environments suffer from limited usability and extensibility owing to platform dependency and reliance on predefined commands. This paper proposes a web-based three-dimensional simulation environment aimed at high user accessibility, together with a small web server linked with Python scripts. Nonlinear robot control examples are used to verify computing efficiency, process management, and the extensibility of user scripts.
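Modeling joint coordinates with homogeneous transformation matrices, as the abstract describes, can be sketched as follows for a hypothetical two-link planar arm; the link lengths and joint angles are illustrative.

```python
import math

def rot_z(theta):
    """Homogeneous 4x4 rotation about the z-axis."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0, 0],
            [s,  c, 0, 0],
            [0,  0, 1, 0],
            [0,  0, 0, 1]]

def trans(x, y, z):
    """Homogeneous 4x4 translation."""
    return [[1, 0, 0, x],
            [0, 1, 0, y],
            [0, 0, 1, z],
            [0, 0, 0, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# Hypothetical two-link arm: joint 1 at +90 deg, unit link,
# joint 2 at -90 deg, unit link. Chain the frames left to right.
T = rot_z(math.pi / 2)
for M in (trans(1, 0, 0), rot_z(-math.pi / 2), trans(1, 0, 0)):
    T = matmul(T, M)

end_effector = (T[0][3], T[1][3], T[2][3])   # position of the arm tip
```

Composing the per-joint matrices in order yields the end-effector pose directly, which is exactly what a simulator visualizes at each frame.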

Exaggerating Character Motions Using Quadratic Deformation (이차 변형을 이용한 캐릭터 동작의 과장 기법)

  • Kwon, Ji-Yong;Lee, In-Kwon
    • Journal of KIISE: Computing Practices and Letters / v.16 no.5 / pp.611-615 / 2010
  • In this paper, we propose a method that exaggerates a character motion using quadratic deformation. While previous methods tend to exaggerate the rotational motion of individual joint angles, our method models the whole-body pose at each frame and exaggerates it, so that the full-body action of the character is exaggerated. Our method runs in real time and prevents joints from rotating in unexpected directions.

Interactive Drama System (인터랙티브 드라마 시스템)

  • Kang, Woo-Jin;Lee, Je-Hee
    • Journal of the Korea Computer Graphics Society / v.11 no.3 / pp.41-48 / 2005
  • Video whose content changes according to viewer intervention has aroused new public interest in the visual-media industry and in computer graphics. However, building a system that generates such video is difficult, because it must produce diverse yet complete variations from a limited number of clips while giving the user control over those variations. In this paper, we propose, as one solution to this problem, a system that creates an interactive drama from individually filmed live-action footage. The system smoothly connects individual video clips to generate dramas with diverse storylines and a complete structure. The user can change the content, length, genre, and characters of the generated drama as desired. To build this system, we model scenes in a new way and propose a scene graph as a method for appropriately selecting and connecting scenes. For interaction between the final video and the user, we present vision-, motion-, and sketch-based interfaces. Finally, we evaluate the usefulness of the system through a user survey.
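The scene-graph idea (selecting and connecting scenes so that every generated storyline is complete) can be sketched with a hypothetical graph; the scene names are invented for illustration, not taken from the paper.

```python
# Hypothetical scene graph: nodes are filmed scenes, edges are transitions
# that can be cut together smoothly.
scene_graph = {
    "intro":      ["conflict_a", "conflict_b"],
    "conflict_a": ["resolution"],
    "conflict_b": ["twist"],
    "twist":      ["resolution"],
    "resolution": [],                # terminal scene: the drama is complete
}

def storylines(graph, node, path=()):
    """Enumerate every complete drama: a path from `node` to a terminal scene."""
    path = path + (node,)
    if not graph[node]:
        return [path]
    result = []
    for nxt in graph[node]:
        result.extend(storylines(graph, nxt, path))
    return result

dramas = storylines(scene_graph, "intro")
```

User choices (content, length, genre) then amount to constraints that prune which of these paths is played.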


Neural Oscillator based Two-link Robot Arm Control (Neural Oscillator 특성을 활용한 2축 링크 로봇 팔 제어)

  • Kwon, J.S.;Yang, W.;Park, G.T.;You, B.J.
    • Proceedings of the KIEE Conference / 2008.07a / pp.1813-1814 / 2008
  • In this paper, we propose a robot arm control system using neural oscillators to closely mimic biological motor mechanisms. The neural oscillator, a mathematical model of the Central Pattern Generator (CPG) that governs periodic autonomous movement in humans and animals, exhibits the entrainment effect as one of its key properties. In general, this property lets us generate motion that interacts appropriately with disturbances such as unknown changes in the external environment. To demonstrate this, we coupled a virtual neural oscillator model to each joint and attached an F/T sensor to the end of the arm to detect environmental changes and disturbances. We examine whether the real-time two-link robot arm system coupled with the neural oscillator model performs a given target motion (a circular motion) while recognizing unknown changes in the external environment (an arbitrary wall) and generating appropriate motion.
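A Matsuoka-style two-neuron oscillator is a common mathematical model of the CPG described above; the sketch below integrates one with Euler steps. The parameter values are illustrative (chosen inside the known oscillation region 1 + tau/T < a < 1 + beta), not the paper's.

```python
# Matsuoka-style mutual-inhibition oscillator: two neurons with states x,
# fatigue variables v, rectified outputs y = max(0, x), and tonic input s.
tau, T, beta, a, s = 0.25, 0.5, 2.5, 2.5, 1.0
dt = 0.005

x1, x2, v1, v2 = 0.1, 0.0, 0.0, 0.0   # slight asymmetry to break symmetry
ys = []
for _ in range(8000):                 # 40 s of simulated time
    y1, y2 = max(0.0, x1), max(0.0, x2)
    dx1 = (-x1 - beta * v1 - a * y2 + s) / tau
    dx2 = (-x2 - beta * v2 - a * y1 + s) / tau
    dv1 = (-v1 + y1) / T
    dv2 = (-v2 + y2) / T
    x1 += dt * dx1; x2 += dt * dx2
    v1 += dt * dv1; v2 += dt * dv2
    ys.append(y1)
```

The entrainment property the abstract mentions comes from adding a sensory feedback term (e.g. a scaled F/T signal) to each neuron's input, which pulls the limit cycle toward the external rhythm.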


Real-time Facial Modeling and Animation based on High Resolution Capture (고해상도 캡쳐 기반 실시간 얼굴 모델링과 표정 애니메이션)

  • Byun, Hae-Won
    • Journal of Korea Multimedia Society / v.11 no.8 / pp.1138-1145 / 2008
  • Recently, performance-driven facial animation has become popular in various areas. In television and games, it is important to guarantee real-time animation for various characters whose appearance differs from the performer's. In this paper, we present a new facial animation approach based on motion capture. For this purpose, we address three issues: facial expression capture, expression mapping, and facial animation. Finally, we show the results of various experiments on different types of face models.


Quality Metric Modeling with Full Video Scalability (비디오 스케일러빌리티를 고려한 영상 품질 메트릭 모델링)

  • Suh, Dong-Jun;Kim, Cheon-Seog;Bae, Tae-Meon;Ro, Yong-Man
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2006.11a / pp.81-84 / 2006
  • To provide content suited to diverse multimedia consumption environments, scalable video that can reflect those environments is needed. Since scalable video allows various combinations of frame rate, SNR, and spatial resolution, the combination that gives the user the best quality must be determined. In this paper, we therefore propose, through subjective assessment, a new video quality metric that expresses quality as it varies with frame rate, SNR, spatial resolution, and the motion speed of the video. The proposed quality metric showed a higher correlation coefficient with subjective quality preference than the average correlation coefficient between PSNR and subjective quality scores. Because it can measure quality under various combinations of coding conditions, the proposed metric can determine optimal coding conditions in constrained multimedia consumption environments.
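The correlation-coefficient comparison described above can be sketched as follows; the subjective scores and metric values are invented for illustration, not the paper's data.

```python
def pearson(xs, ys):
    """Sample Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx ** 0.5 * vy ** 0.5)

# Hypothetical subjective preference scores and two candidate quality metrics.
mos        = [1.2, 2.0, 3.1, 3.9, 4.8]       # mean opinion scores
new_metric = [10.5, 20.1, 31.0, 39.8, 48.9]  # tracks preference almost linearly
psnr       = [28.0, 30.5, 29.0, 33.0, 31.5]  # tracks preference less consistently

r_new  = pearson(mos, new_metric)
r_psnr = pearson(mos, psnr)
```

A metric is judged better when its correlation with the subjective scores is higher, which is the evaluation criterion the abstract reports.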
