• Title/Abstract/Keyword: Motion capture technology

Search results: 67

Emerging Trends in 3D Technology Adopted in Apparel Design Research and Product Development (의류학 연구 및 패션산업 현장에 도입되고 있는 3D 기술동향 및 적용사례 고찰)

  • Park, Huiju;Koo, Helen
    • Journal of the Korean Society of Clothing and Textiles / Vol. 42, No. 1 / pp.195-209 / 2018
  • This study reviewed emerging trends in 3D technology adopted in apparel design research and product development for rapid prototyping and effective evaluation of product performance. Based on a literature review, the authors discuss the technical advantages, practical merits and limitations, applications, and ongoing development efforts of three methodologies: 3D body scanning, 3D motion capture, and 3D virtual fit simulation. Such data-driven technical approaches, observed in recent apparel design research and industry practice, are expected to be adopted increasingly in the field to improve consumers' satisfaction with the functionality, aesthetics, and comfort of a wide range of apparel products, including daily wear, sports apparel, and protective clothing.

Motion Generation of Articulated Figure Using Minimal Sensors (Minimal Sensors 를 이용한 관절체의 움직임 생성)

  • Lee, Ran-Hee;Lee, In-Ho;Lee, Chil-Woo
    • Proceedings of the Korea Information Processing Society Conference / 2001 Fall Conference Proceedings (Vol. 1) / pp.611-614 / 2001
  • This paper describes an animation algorithm that reproduces the natural motion of a virtual character using seven magnetic sensors. The key feature of the method is that the positions and orientations of body feature points are applied to inverse kinematics theory, and joint directions are expressed using the normal vectors of the 3D vectors at these feature points, so that whole-body motion can be reproduced with a minimal number of sensors. Since the method can be implemented even in a simple motion capture environment running on a personal computer, it can be usefully applied to the production of various visual application systems that use animation.

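The abstract above applies feature-point positions and orientations to inverse kinematics and fixes joint directions with normal vectors, but does not spell out the formulation. As a rough, generic illustration of that kind of step (not the authors' algorithm), the Python/NumPy sketch below places an elbow for a two-bone chain from a measured shoulder and wrist position plus a pole vector standing in for the normal; the function name, bone lengths, and sensor values are all assumptions.

```python
import numpy as np

def two_bone_ik(shoulder, wrist, pole, upper_len, fore_len):
    """Place the elbow of a 2-bone chain (shoulder -> elbow -> wrist).

    shoulder, wrist : 3D positions measured by sensors
    pole            : vector choosing the plane the elbow bends in
    upper_len, fore_len : fixed bone lengths of the skeleton
    """
    d_vec = wrist - shoulder
    d = np.linalg.norm(d_vec)
    # Clamp the reach so the triangle inequality always holds.
    d = np.clip(d, abs(upper_len - fore_len) + 1e-6,
                upper_len + fore_len - 1e-6)
    # Law of cosines: distance from the shoulder to the elbow's
    # projection onto the shoulder->wrist axis.
    a = (upper_len**2 - fore_len**2 + d**2) / (2.0 * d)
    h = np.sqrt(max(upper_len**2 - a**2, 0.0))
    axis = d_vec / np.linalg.norm(d_vec)
    # The pole vector's component perpendicular to the axis defines
    # the bending direction.
    bend = pole - np.dot(pole, axis) * axis
    bend /= np.linalg.norm(bend)
    return shoulder + a * axis + h * bend

# Hypothetical sensor readings (metres).
shoulder = np.array([0.0, 1.4, 0.0])
wrist    = np.array([0.35, 1.1, 0.25])
pole     = np.array([0.0, 0.0, 1.0])   # stand-in for the measured normal
print(two_bone_ik(shoulder, wrist, pole, upper_len=0.30, fore_len=0.28))
```

The same law-of-cosines construction applies to knees or any other two-bone limb, which is why a small set of sensors at the end-effectors can plausibly drive a full skeleton.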

Real-time Position Generation of Intermediate Joints Using Position Information of End-effector (End-effector의 위치정보를 이용한 중간관절의 실시간 위치 생성)

  • Lee, Ran-Hee;Kim, Sung-Eun;Park, Chang-Joon;Lee, In-Ho
    • Proceedings of the Korea Multimedia Society Conference / 2002 Spring Conference Proceedings (Vol. 1) / pp.459-464 / 2002
  • This paper describes a method for generating the positions of intermediate joints using the 3D position information of end-effectors extracted from images that capture human motion in real time. The system analyzes stereo images from two synchronized color CCD cameras placed at the front left and front right of the performer, extracts feature points for the root (the center of the body) and for end-effectors such as the head, hands, and feet, and generates their 3D position information. The generated positions of the root and end-effectors are applied to an inverse kinematics algorithm, and the positions of the intermediate joints are computed precisely by taking the anatomical constraints of human joints into account. Because generating the intermediate joint positions makes it possible to acquire motion information for all of the performer's joints in real time and to produce motion data, the method can be used in various multimedia fields such as games and animation.

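One standard building block for recovering 3D feature-point positions from two synchronized, calibrated cameras is linear (DLT) triangulation. The NumPy sketch below shows only that step, under assumed calibration; the projection matrices, intrinsics, and pixel detections are placeholders, and the paper's own feature extraction and inverse kinematics stages are not reproduced here.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one feature point.

    P1, P2 : 3x4 camera projection matrices (from calibration)
    x1, x2 : (u, v) pixel coordinates of the same feature in each view
    Returns the 3D point that best satisfies both projections.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Homogeneous least-squares solution: last right singular vector of A.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

# Placeholder calibration: two cameras 0.6 m apart, both looking down +Z.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.6], [0.0], [0.0]])])

X_true = np.array([0.2, 0.1, 2.0, 1.0])          # e.g. a hand position
x1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]        # simulated detections
x2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
print(triangulate(P1, P2, x1, x2))               # ~[0.2, 0.1, 2.0]
```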

Realtime Facial Expression Control of 3D Avatar by Isomap of Motion Data (모션 데이터에 Isomap을 사용한 3차원 아바타의 실시간 표정 제어)

  • Kim, Sung-Ho
    • The Journal of the Korea Contents Association / Vol. 7, No. 3 / pp.9-16 / 2007
  • This paper describes a methodology that distributes high-dimensional facial motion data on a two-dimensional plane using the Isomap algorithm, together with a user interface technique that lets the user control facial expressions by selecting them while navigating this space in real time. The Isomap algorithm proceeds in three steps. First, the adjacent expressions of each expression datum are defined; the adjacency distance used to define adjacent expressions is computed with the Pearson correlation coefficient. Second, the manifold distance between expressions is calculated to compose the expression space: the space is created by computing the shortest (manifold) distance between any two expressions, for which we use the Floyd algorithm. Third, the multi-dimensional expression space is realized with multidimensional scaling and projected onto a two-dimensional plane. Using the interface, users can control the facial expressions of a 3D avatar in real time while navigating the two-dimensional space.
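
To make the three steps in the abstract concrete (adjacency, manifold distances via the Floyd algorithm, projection by multidimensional scaling), here is a compact NumPy sketch of an Isomap-style embedding on random stand-in expression vectors. The correlation-based distance, neighbour count k, and toy data are assumptions for illustration, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)
frames = rng.normal(size=(60, 90))        # 60 toy "expression" frames, 90-D each

# 1) Adjacency: distance = 1 - Pearson correlation, keep k nearest neighbours.
corr = np.corrcoef(frames)
dist = 1.0 - corr
k = 6
graph = np.full_like(dist, np.inf)
for i in range(len(dist)):
    nn = np.argsort(dist[i])[1:k + 1]     # skip self
    graph[i, nn] = dist[i, nn]
    graph[nn, i] = dist[i, nn]            # keep the graph symmetric
np.fill_diagonal(graph, 0.0)              # (assumes the k-NN graph is connected)

# 2) Manifold (geodesic) distances with the Floyd-Warshall algorithm.
g = graph.copy()
n = len(g)
for m in range(n):
    g = np.minimum(g, g[:, m:m + 1] + g[m:m + 1, :])

# 3) Classical MDS: double-center the squared distances, take the top 2 eigenvectors.
d2 = g ** 2
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ d2 @ J
w, v = np.linalg.eigh(B)
coords = v[:, -2:] * np.sqrt(np.maximum(w[-2:], 0.0))
print(coords.shape)                       # (60, 2) points to navigate
```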

Interactive Facial Expression Animation of Motion Data using Sammon's Mapping (Sammon 매핑을 사용한 모션 데이터의 대화식 표정 애니메이션)

  • Kim, Sung-Ho
    • The KIPS Transactions: Part A / Vol. 11A, No. 2 / pp.189-194 / 2004
  • This paper describes a method for distributing high-dimensional facial expression motion data in a two-dimensional space, and a method for creating facial expression animation in real time as an animator navigates this space and selects the desired expressions. The expression space was composed from about 2,400 facial expression frames, and constructing it comes down to determining the shortest distance between any two expressions. The expression space is a manifold space that approximates the distance between two points as follows. An expression state vector representing the state of each expression is defined using the distance matrix of distances between markers; if two expressions are adjacent, their distance is regarded as an approximation of the shortest distance between them. Once the adjacency distances between neighboring expressions are determined, they are chained together to yield the shortest distance between any two expression states, using the Floyd algorithm. To realize this high-dimensional expression space, it is projected onto two dimensions using Sammon's mapping. Facial animation is then created in real time as the animator navigates the two-dimensional space through the user interface.
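
For reference, the sketch below minimises Sammon's stress by plain gradient descent to project a given distance matrix onto 2D. The toy Euclidean distances, learning rate, and iteration count are arbitrary stand-ins; the paper builds its distance matrix from expression-state vectors and Floyd shortest paths instead.

```python
import numpy as np

def sammon(D, n_iter=300, lr=0.3, seed=1):
    """Project points with pairwise distances D onto 2D by minimising
    Sammon's stress with plain gradient descent (a minimal sketch)."""
    n = len(D)
    rng = np.random.default_rng(seed)
    Y = rng.normal(scale=1e-2, size=(n, 2))
    eps = 1e-12
    c = D[np.triu_indices(n, 1)].sum()            # normalising constant
    for _ in range(n_iter):
        diff = Y[:, None, :] - Y[None, :, :]      # (n, n, 2)
        d = np.linalg.norm(diff, axis=-1) + eps   # current 2D distances
        np.fill_diagonal(d, 1.0)
        Dm = D + eps
        np.fill_diagonal(Dm, 1.0)
        # Gradient of sum_ij (D_ij - d_ij)^2 / D_ij with respect to Y.
        w = (d - Dm) / (d * Dm)
        np.fill_diagonal(w, 0.0)
        grad = 2.0 / c * (w[:, :, None] * diff).sum(axis=1)
        Y -= lr * grad
    return Y

# Toy stand-in for the ~2,400 expression frames (kept tiny here).
rng = np.random.default_rng(0)
frames = rng.normal(size=(80, 30))
D = np.linalg.norm(frames[:, None] - frames[None, :], axis=-1)
print(sammon(D).shape)                            # (80, 2)
```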

The Direction of Research and Development for Revitalization of K-POP Dance Contents Industry (K-POP 댄스 콘텐츠 산업 활성화를 위한 연구 개발 방향)

  • Kim, Dohyung;Jang, Minsu;Kim, Jaehong
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2015 Spring Conference / pp.855-858 / 2015
  • K-POP dance has been the biggest contributor to the worldwide spread of K-POP. However, despite its popularity, Korea has still not secured the foundational technology and databases required to expand K-POP dance content into the global market, which has led to stagnation in the dance content industry. This paper suggests a direction for technology development to revitalize the K-POP dance content industry. As one of the related studies, we introduce the research project conducted by the Electronics and Telecommunications Research Institute (ETRI) and discuss the prospects for the technology and its ripple effects.


Digital Motion Capture for Types and Shapes of 3D Character Animation (디지털 모션 캡쳐(Motion Capture)를 위한 3D캐릭터 애니메이션의 종류별, 형태별 모델 분류)

  • Yun, Hwang-Rok;Ryu, Seuc-Ho;Lee, Dong-Lyeor
    • The Journal of the Korea Contents Association / Vol. 7, No. 8 / pp.102-108 / 2007
  • The game industry, the most representative sector of the culture industry in the digital era of the 21st century, has recently been attracting growing interest. 2D and 3D animation have grown and developed continuously in their expression of motion along with advances in computer technology, and their range of application is widening across TV, film, and the game industry through rapid changes in computer hardware and software. The recent trend in game graphics is a shift in emphasis from 2D to 3D, driven by 3D games and 3D game characters that heighten the player's immersion and sense of control compared with the simplicity of 2D. This paper classifies motion capture and 3D character animation for realistic 3D game characters by type and form. It first surveys examples of realistic 3D game characters, and then proposes type and form classification models for 3D game character animation, classification data, and a character animation production process that can be applied quickly and effectively; these are expected to be highly useful to the game and digital content industries.

Simulation of Virtual Marionette with 3D Animation Data (3D Animation Data를 활용한 가상 Marionette 시뮬레이션)

  • Oh, Eui-Sang;Sung, Jung-Hwan
    • The Journal of the Korea Contents Association / Vol. 9, No. 12 / pp.1-9 / 2009
  • A doll made from various materials is a miniature based on the human form, and as a component of puppet shows it has long been part of human cultural activity. However, supply and demand in the puppet show industry keep decreasing, since the number of professional puppeteers has fallen rapidly and the skill is difficult to learn. Accordingly, many studies on robotic marionettes for the automation of puppet shows have been carried out internationally, and more efficient structural design and process development are required to improve the movement and expression of puppets driven by motor-based controllers. In this research, we suggest an effective way to express a marionette's motions using motion data obtained from motion capture and a 3D graphics program; by applying 3D motion data and proposing a simulation process, the approach can help save time and cost when a robotic marionette system is actually built.
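
The abstract proposes driving a robotic marionette from motion data but leaves the mapping unspecified. As one purely hypothetical illustration, the sketch below turns a captured hand-joint trajectory into per-frame string lengths measured from a fixed attachment point on the control bar, the kind of signal a winch motor controller could track; the anchor point, trajectory, and spool radius are invented for the example.

```python
import numpy as np

# Hypothetical hand-joint world positions (metres) from motion data, ~30 fps.
t = np.linspace(0.0, 2.0, 61)
hand = np.stack([0.10 * np.sin(2 * np.pi * t),          # sway left/right
                 0.90 + 0.05 * np.sin(4 * np.pi * t),    # bob up/down
                 np.zeros_like(t)], axis=1)

# Fixed string attachment point on the control bar above the stage.
anchor = np.array([0.0, 1.8, 0.0])

# String length per frame = distance from the anchor to the hand joint.
lengths = np.linalg.norm(hand - anchor, axis=1)

# Convert length changes to winch spool rotations (assumed 2 cm spool radius).
spool_radius = 0.02
turns = (lengths - lengths[0]) / (2 * np.pi * spool_radius)
print(lengths[:3], turns[:3])
```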

Realtime Facial Expression Control of 3D Avatar by PCA Projection of Motion Data (모션 데이터의 PCA투영에 의한 3차원 아바타의 실시간 표정 제어)

  • Kim, Sung-Ho
    • Journal of Korea Multimedia Society / Vol. 7, No. 10 / pp.1478-1484 / 2004
  • This paper presents a method for controlling the facial expression of a 3D avatar in real time by having the user select a sequence of facial expressions in an expression space. The expression space is created from about 2,400 frames of facial expressions. To represent the state of each expression, we use the distance matrix of distances between pairs of feature points on the face, and the set of distance matrices is used as the space of expressions. The facial expression of the 3D avatar is controlled in real time as the user navigates this space. To support this process, we visualize the space of expressions in 2D using a Principal Component Analysis (PCA) projection. To see how effective the system is, we had users control the facial expressions of a 3D avatar with it, and this paper evaluates the results.

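The paper represents each expression frame by a matrix of distances between facial feature points and visualizes the collection in 2D with a PCA projection. The NumPy sketch below performs the analogous computation on random placeholder frames: each frame's pairwise marker distances are flattened, mean-centred, and projected onto the top two principal components; the marker count and data are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n_frames, n_markers = 200, 20                  # stand-ins for ~2,400 frames
markers = rng.normal(size=(n_frames, n_markers, 3))

# Per-frame distance matrix, flattened to its upper triangle.
iu = np.triu_indices(n_markers, 1)
pair_d = np.linalg.norm(markers[:, :, None, :] - markers[:, None, :, :],
                        axis=-1)               # (frames, markers, markers)
features = pair_d[:, iu[0], iu[1]]             # (frames, n_pairs)

# PCA via SVD of the mean-centred feature matrix.
centred = features - features.mean(axis=0)
_, _, vt = np.linalg.svd(centred, full_matrices=False)
coords2d = centred @ vt[:2].T                  # 2D expression space to navigate
print(coords2d.shape)                          # (200, 2)
```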

3D Volumetric Capture-based Dynamic Face Production for Hyper-Realistic Metahuman (극사실적 메타휴먼을 위한 3D 볼류메트릭 캡쳐 기반의 동적 페이스 제작)

  • Oh, Moon-Seok;Han, Gyu-Hoon;Seo, Young-Ho
    • Journal of Broadcast Engineering / Vol. 27, No. 5 / pp.751-761 / 2022
  • With the development of digital graphics technology, the metaverse has become a significant trend in the content market, and the demand for technology that generates high-quality 3D models is rapidly increasing. Accordingly, various technical attempts are being made to create high-quality 3D virtual humans, represented by digital humans. 3D volumetric capture is in the spotlight as a technology that can create a 3D manikin faster and more precisely than existing 3D model creation methods. In this study, we analyze 3D high-precision facial production technology based on practical cases, covering the difficulties of content production and the technologies applied in volumetric 3D and 4D model creation. Based on an actual model implementation case using 3D volumetric capture, we examine techniques for producing a 3D virtual human face and produce a new metahuman using a graphics pipeline for efficient human facial generation.