• Title/Summary/Keyword: 모션 캡쳐 (motion capture)

Study on User-Friendly Rhythm Dance Game Utilizing Beat Element of Music (음악의 비트 요소를 활용한 사용자 친화적 리듬댄스게임 연구)

  • Yi, Tae-Ha; Jeong, Seung-Hwa; Goo, Bon-Cheol
    • Journal of Korea Game Society, v.15 no.2, pp.43-52, 2015
  • This study suggests a user-friendly play method for rhythm dance games, focusing on natural interaction between the music and the user's movement. Whereas existing rhythm dance games impose fixed patterns on users, this study adopts a play method based on beat, a basic element of dance music, to minimize that limitation. The beat-feeling method automatically evaluates body position values extracted with Kinect, divided into three parts: shoulders, knees, and head. Through this process, users can play the game in their own dance style rather than by fixed game rules.
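
A minimal sketch of the beat-alignment idea described above, assuming beat times have already been extracted from the audio track and per-frame Kinect joint positions are available; the function name, peak heuristic, and tolerance are illustrative, not taken from the paper:

```python
import numpy as np

def beat_hit_ratio(joint_y, timestamps, beat_times, tolerance=0.15):
    """Fraction of musical beats that coincide with a motion peak.

    joint_y    : per-frame vertical position of one tracked part
                 (e.g. head, shoulders, or knees from Kinect)
    timestamps : per-frame capture times in seconds
    beat_times : beat onsets (seconds) extracted from the dance track
    tolerance  : max offset (s) for a motion peak to count as on-beat
    """
    joint_y, t = np.asarray(joint_y), np.asarray(timestamps)
    velocity = np.gradient(joint_y, t)
    # A "bounce" is a sign change in vertical velocity (a local extremum).
    peaks = t[1:][np.diff(np.sign(velocity)) != 0]
    if peaks.size == 0:
        return 0.0
    hits = sum(np.abs(peaks - b).min() < tolerance for b in beat_times)
    return hits / len(beat_times)

# Averaging this score over shoulders, knees, and head rewards any
# personal dance style that lands on the beat, with no fixed step chart.
```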

The Direction of Research and Development for Revitalization of K-POP Dance Contents Industry (K-POP 댄스 콘텐츠 산업 활성화를 위한 연구 개발 방향)

  • Kim, Dohyung; Jang, Minsu; Kim, Jaehong
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference, 2015.05a, pp.855-858, 2015
  • K-POP dance has been the biggest contributor to the worldwide spread of K-POP. However, despite its popularity, Korea still has not secured the foundational technology and databases required for global expansion of K-POP dance content, which has left the dance content industry stagnant. This paper suggests a direction for technology development to revitalize the K-POP dance content industry. As one of the related studies, we introduce the research project conducted by the Electronics and Telecommunications Research Institute (ETRI) and address prospects for the technology and its ripple effects.

A Study on the Application of Character Animation for Motion Analysis Using Motion Capture Data (모션 캡쳐 자료를 이용한 동작 분석용 캐릭터 애니메이션의 적용 방법에 관한 연구)

  • Son, Won-Il; Jin, Young-Wan; Kang, Sang-Hack
    • Korean Journal of Applied Biomechanics, v.17 no.4, pp.37-44, 2007
  • This study compared the Character Studio of 3ds Max with OpenGL to find an adequate character-animation modeling method for motion analysis in sports biomechanics. The subject was one male golfer, and we obtained the positional coordinates of the required markers by filming his golf swing. Because the Character Studio method uses meticulously designed character meshes, it enabled high-level animation, but applying Physique took a long time and each set of motion data required repeated adjustment. With the OpenGL method, a character, once completed, could be reused for almost every subject and the desired program control was available, but because each character had to be created programmatically, it was hard to model characters in fine detail. Because the Character Studio method is actively studied not only in biomechanics but in many other research areas, it is expected to become more usable in the near future. In contrast, the OpenGL method is easy to apply and conveniently accommodates other mechanical data.
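
A rough sketch of the programmatic (OpenGL-style) approach: connect captured marker coordinates into a stick-figure skeleton and render it. The marker set and segment list are hypothetical, and matplotlib stands in for OpenGL to keep the example self-contained:

```python
import matplotlib.pyplot as plt

# Hypothetical marker coordinates (metres) for one captured frame.
frame = {
    "head": (0.0, 0.0, 1.7), "shoulder_l": (-0.2, 0.0, 1.4),
    "shoulder_r": (0.2, 0.0, 1.4), "hip": (0.0, 0.0, 1.0),
    "knee_l": (-0.1, 0.0, 0.5), "knee_r": (0.1, 0.0, 0.5),
    "foot_l": (-0.1, 0.1, 0.0), "foot_r": (0.1, 0.1, 0.0),
}
# Segments connecting markers into a reusable stick-figure character.
segments = [("head", "shoulder_l"), ("head", "shoulder_r"),
            ("shoulder_l", "hip"), ("shoulder_r", "hip"),
            ("hip", "knee_l"), ("hip", "knee_r"),
            ("knee_l", "foot_l"), ("knee_r", "foot_r")]

ax = plt.figure().add_subplot(projection="3d")
for a, b in segments:
    xs, ys, zs = zip(frame[a], frame[b])
    ax.plot(xs, ys, zs, "o-")  # same skeleton works for any subject's markers
plt.show()
```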

Human-like Arm Movement Planning for Humanoid Robots Using Motion Capture Database (모션캡쳐 데이터베이스를 이용한 인간형 로봇의 인간다운 팔 움직임 계획)

  • Kim, Seung-Su; Kim, Chang-Hwan; Park, Jong-Hyeon; You, Bum-Jae
    • The Journal of Korea Robotics Society, v.1 no.2, pp.188-196, 2006
  • While communicating and interacting with a human through motions or gestures, a humanoid robot needs not only to look like a human but also to behave like one so that the meanings of its motions or gestures come across. Among various human-like behaviors, the arm motions of a humanoid robot are essential for communicating with people through motion. In this work, a mathematical representation characterizing human arm motions is first proposed: human arm motions are characterized by the elbow elevation angle, which is determined from the position and orientation of the hand. The representation is obtained mathematically using an approximation tool, the Response Surface Method (RSM). A method for generating human-like arm motions in real time using this representation is then presented. The method was evaluated by generating human-like arm motions when the humanoid robot was asked to move its arm from one point to another, including rotation of its hand. The example motion was performed on the KIST humanoid robot MAHRU.
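
A minimal numpy sketch of the response-surface idea: fit a second-order polynomial that maps hand-pose features to the elbow elevation angle by least squares, then query it in real time. The feature set and the random placeholder data are assumptions; the paper fits RSM to captured human motions using both hand position and orientation:

```python
import numpy as np

def quadratic_features(X):
    """Second-order response-surface basis: 1, x_i, and x_i * x_j terms."""
    X = np.atleast_2d(X)
    n, d = X.shape
    cols = [np.ones(n)] + [X[:, i] for i in range(d)]
    cols += [X[:, i] * X[:, j] for i in range(d) for j in range(i, d)]
    return np.column_stack(cols)

# Placeholder training data standing in for motion-capture measurements:
# hand positions paired with observed elbow elevation angles.
rng = np.random.default_rng(0)
hand_positions = rng.random((200, 3))
elbow_angles = rng.random(200)

# Least-squares fit of the response surface.
coef, *_ = np.linalg.lstsq(quadratic_features(hand_positions),
                           elbow_angles, rcond=None)

def elbow_elevation(hand_pos):
    """Predict a human-like elbow elevation angle for a new hand pose."""
    return float(quadratic_features(hand_pos) @ coef)
```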

Facial Animation Generation by Korean Text Input (한글 문자 입력에 따른 얼굴 에니메이션)

  • Kim, Tae-Eun; Park, You-Shin
    • The Journal of the Korea Institute of Electronic Communication Sciences, v.4 no.2, pp.116-122, 2009
  • In this paper, we propose a new method that generates mouth-shape trajectories for the characters the user types. It is based on basic syllables, which makes it well suited to mouth-shape generation. We examine the principles of Hangul syllable formation, group syllables by the similarity of their mouth shapes, and select basic syllables accordingly. We then consider the articulation of each phoneme, create a new mouth-shape trajectory, and apply it to the face of a 3D avatar.
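
The Hangul decomposition that such a method rests on has a closed form in Unicode, sketched below. The jamo tables follow the standard Unicode layout; the vowel-to-mouth-shape grouping is a hypothetical illustration, not the paper's actual mapping:

```python
# Decompose precomposed Hangul syllables (U+AC00..U+D7A3) into
# initial/medial/final jamo, then look up a mouth shape for the vowel.
INITIALS = list("ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ")
MEDIALS = list("ㅏㅐㅑㅒㅓㅔㅕㅖㅗㅘㅙㅚㅛㅜㅝㅞㅟㅠㅡㅢㅣ")
FINALS = [""] + list("ㄱㄲㄳㄴㄵㄶㄷㄹㄺㄻㄼㄽㄾㄿㅀㅁㅂㅄㅅㅆㅇㅈㅊㅋㅌㅍㅎ")

# Hypothetical grouping of vowels by mouth opening/rounding.
VOWEL_VISEME = {"ㅏ": "open", "ㅓ": "open", "ㅗ": "round", "ㅜ": "round",
                "ㅡ": "spread", "ㅣ": "spread"}

def decompose(syllable):
    idx = ord(syllable) - 0xAC00
    if not 0 <= idx <= 11171:
        raise ValueError("not a precomposed Hangul syllable")
    return (INITIALS[idx // (21 * 28)],
            MEDIALS[(idx % (21 * 28)) // 28],
            FINALS[idx % 28])

for ch in "안녕":                      # e.g. 안 -> ㅇ ㅏ ㄴ
    ini, med, fin = decompose(ch)
    print(ch, ini, med, fin, "->", VOWEL_VISEME.get(med, "neutral"))
```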

Development of a Real Time Three-Dimensional Motion Capture System by Using Single PSD Unit (단일 PSD를 이용한 실시간 3차원 모션캡쳐 시스템 개발)

  • Jo, Yong-Jun; Oh, Choon-Suk; Ryu, Young-Kee
    • Journal of Institute of Control, Robotics and Systems, v.12 no.11, pp.1074-1080, 2006
  • Motion capture systems are gaining popularity in entertainment, medicine, sports, education, and industry, with animation and gaming applications for entertainment taking the lead. A wide variety of systems are available for motion capture, but most are complicated and expensive. In the general class of optical motion capture, two or more optical sensors are needed to measure the 3D positions of markers attached to the body. Recently, a 3D motion capture system using two Position Sensitive Detector (PSD) optical sensors was introduced to capture the high-speed motion of active infrared LED markers. The PSD-based system, however, requires a geometric calibration procedure for the two PSD sensor modules that is too difficult for ordinary customers. In this research, we introduce a new system that uses a single PSD sensor unit to obtain the 3D positions of active IR LED markers. The new system is easy to calibrate and inexpensive.
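
For background, one common textbook formulation of how a 2D lateral-effect PSD reports a light-spot position from its electrode photocurrents is sketched below; the paper's sensor type, geometry, and calibration procedure may well differ:

```python
def psd_spot_position(ix1, ix2, iy1, iy2, lx=10.0, ly=10.0):
    """Light-spot position on a duo-lateral 2D PSD (one common model).

    ix1, ix2 : photocurrents at the two x-axis electrodes
    iy1, iy2 : photocurrents at the two y-axis electrodes
    lx, ly   : active-area dimensions in mm
    """
    x = (lx / 2.0) * (ix2 - ix1) / (ix2 + ix1)
    y = (ly / 2.0) * (iy2 - iy1) / (iy2 + iy1)
    return x, y

# A centred IR LED marker gives equal currents, hence (0.0, 0.0).
print(psd_spot_position(1.0, 1.0, 1.0, 1.0))
```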

Cross-covariance 3D Coordinate Estimation Method for Virtual Space Movement Platform (가상공간 이동플랫폼을 위한 교차 공분산 3D 좌표 추정 방법)

  • Jung, HaHyoung; Park, Jinha; Kim, Min Kyoung; Chang, Min Hyuk
    • Journal of Korea Society of Industrial Information Systems, v.25 no.5, pp.41-48, 2020
  • Recently, as demand grows in the movement-platform market for virtual/augmented/mixed reality, experiential content that gives users a real-world feel through a virtual environment is drawing attention. In this paper, to track the trackers used for user location estimation in a virtual-space movement platform that motion-captures trainees, we present a method of estimating 3D coordinates by cross covariance from the coordinates of the markers projected onto the image. The validity of the proposed algorithm is verified through rigid-body tracking experiments.
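
The paper's specific cross-covariance estimator is not reproduced here, but the underlying multi-view problem can be sketched: recover a marker's 3D position from its 2D image projections by linear (DLT) least-squares triangulation, with the camera projection matrices assumed known from prior calibration:

```python
import numpy as np

def triangulate(projections, cameras):
    """Estimate a marker's 3D position from its 2D projections.

    projections : list of (u, v) pixel coordinates, one per view
    cameras     : list of 3x4 projection matrices P = K [R | t]
    """
    rows = []
    for (u, v), P in zip(projections, cameras):
        # Each view contributes two linear constraints on the
        # homogeneous 3D point X: u * (P3 . X) = P1 . X, and
        # v * (P3 . X) = P2 . X.
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.asarray(rows)
    # Least-squares solution: right singular vector associated with
    # the smallest singular value of A.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]
```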

Vision-based Low-cost Walking Spatial Recognition Algorithm for the Safety of Blind People (시각장애인 안전을 위한 영상 기반 저비용 보행 공간 인지 알고리즘)

  • Sunghyun Kang; Sehun Lee; Junho Ahn
    • Journal of Internet Computing and Services, v.24 no.6, pp.81-89, 2023
  • In modern society, blind people face difficulties in navigating common environments such as sidewalks, elevators, and crosswalks. Research has been conducted to alleviate these inconveniences for the visually impaired through visual and audio aids, but it often runs into practical limitations due to the high cost of wearable devices, high-performance CCTV systems, and voice sensors. In this paper, we propose an artificial intelligence fusion algorithm that uses the low-cost video sensors integrated into smartphones to help blind people navigate safely while walking. The proposed algorithm combines motion capture and object detection to detect moving people and the various obstacles encountered while walking: we employ the MediaPipe library for motion capture to model and detect surrounding pedestrians in motion, and object detection algorithms to model and detect the obstacles that can occur on sidewalks. Through experimentation, we validated the performance of the fusion algorithm, achieving an accuracy of 0.92, a precision of 0.91, a recall of 0.99, and an F1 score of 0.95. This research can help blind people navigate past obstacles such as bollards, shared scooters, and vehicles, thereby enhancing their mobility and safety.
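
The motion-capture half of the pipeline can be sketched with the MediaPipe Pose solution named in the abstract (which tracks one person per frame); the object-detection branch and the fusion logic that combines the two are omitted here:

```python
import cv2
import mediapipe as mp

pose = mp.solutions.pose.Pose(static_image_mode=False,
                              min_detection_confidence=0.5)
cap = cv2.VideoCapture(0)  # stands in for the smartphone camera stream
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB input; OpenCV captures BGR.
    result = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.pose_landmarks:
        # Landmark 0 is the nose; a pedestrian filling more of the frame
        # is closer, which a fusion step could flag as an approaching person.
        nose = result.pose_landmarks.landmark[0]
        print(f"pedestrian nose at ({nose.x:.2f}, {nose.y:.2f})")
cap.release()
```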

Facial Expression Control of 3D Avatar using Motion Data (모션 데이터를 이용한 3차원 아바타 얼굴 표정 제어)

  • Kim Sung-Ho; Jung Moon-Ryul
    • The KIPS Transactions: Part A, v.11A no.5, pp.383-390, 2004
  • This paper proposes a method that controls the facial expression of a 3D avatar by having the user select a sequence of facial expressions in an expression space, and describes the system we built around it. The expression space is created from about 2,400 frames of motion-captured facial expression data. To represent the state of each expression, we use a distance matrix holding the distances between pairs of feature points on the face; the set of distance matrices serves as the expression space. This space, however, does not let one state reach another along a straight trajectory, so we derive trajectories between states from the captured expressions in an approximate manner. First, two states are regarded as adjacent if the distance between their distance matrices is below a given threshold; any two states are considered connected if there is a sequence of adjacent states between them, and a state is assumed to reach another via the shortest such trajectory, found by dynamic programming. Because the expression space, as a set of distance matrices, is multidimensional, we visualize it in 2D using multidimensional scaling (MDS) so the user can navigate it and control the avatar's facial expression in real time. In user tests, participants found the system very useful for real-time facial expression control of a 3D avatar.
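
A compact sketch of the expression-space machinery: pairwise distances between expression states, an adjacency threshold, all-pairs shortest trajectories, and a 2D MDS layout for navigation. Random placeholder vectors stand in for the 2,400 captured frames, and the threshold choice is illustrative:

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
states = rng.random((100, 15))   # each row: a flattened distance matrix

# Pairwise distances between expression states.
D = np.linalg.norm(states[:, None, :] - states[None, :, :], axis=-1)

# States closer than a threshold are adjacent; zeros mean "no edge".
threshold = np.percentile(D[D > 0], 10)
graph = np.where(D < threshold, D, 0.0)

# Shortest trajectories through chains of adjacent states; the
# predecessor matrix reconstructs the frame sequence to play back.
dist, predecessors = shortest_path(graph, return_predecessors=True)

# 2D visualization of the expression space for user navigation.
layout = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(D)
```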

Facial Expression Control of 3D Avatar by Hierarchical Visualization of Motion Data (모션 데이터의 계층적 가시화에 의한 3차원 아바타의 표정 제어)

  • Kim, Sung-Ho; Jung, Moon-Ryul
    • The KIPS Transactions: Part A, v.11A no.4, pp.277-284, 2004
  • This paper presents a facial expression control method for a 3D avatar that lets the user select a sequence of facial frames from a facial expression space whose level of detail the user chooses hierarchically. Our system creates the expression space from about 2,400 captured facial frames, but with so many expressions to choose among, the user has difficulty navigating the space, so we visualize it hierarchically. To partition the space into a hierarchy of subspaces, we use fuzzy clustering. Initially, the system creates about 11 clusters from the space of 2,400 facial expressions; the cluster centers are displayed on a 2D screen and used as candidate key frames for key-frame animation. When the user zooms in (zoom is discrete), the user wants more detail, so the system creates more clusters for the new zoom level, doubling the number of clusters each time the level increases. The user selects new key frames along the navigation path of the previous level and completes the facial expression control specification at the maximum zoom-in; the user can also zoom back out to a previous level and update the navigation path. We had users control the facial expression of a 3D avatar with the system and evaluated it based on the results.
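
A compact fuzzy c-means sketch for the hierarchical clustering step: roughly 11 clusters at the top level, doubling at each zoom-in, with cluster centers serving as candidate key frames. Random placeholder features stand in for the captured expression frames:

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=100, seed=0):
    """Return cluster centers and the fuzzy membership matrix U (n x c)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        W = U ** m
        # Weighted means of the data give the cluster centers.
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1)
        # Standard FCM membership update from the distances.
        U = 1.0 / (d + 1e-12) ** (2.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

frames = np.random.default_rng(1).random((2400, 20))  # placeholder features
for level, c in enumerate([11, 22, 44]):   # clusters double per zoom level
    centers, _ = fuzzy_c_means(frames, c)
    print(f"zoom level {level}: {c} candidate key frames")
```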