• Title/Summary/Keyword: Text Motion

84 search results

A Case Study on AI-Driven <DEEPMOTION> Motion Capture Technology

  • Chen Xi; Jeanhun Chung
    • International Journal of Internet, Broadcasting and Communication / v.16 no.2 / pp.87-92 / 2024
  • The rapid development of artificial intelligence technology in recent years is evident, from the emergence of ChatGPT to innovations like Midjourney, Stable Diffusion, and Sora, OpenAI's upcoming text-to-video technology. Motion capture technology, driven by this AI trend, is undergoing significant advancements that accelerate the progress of the animation industry. Through an analysis of the current application of DEEPMOTION, this paper explores the development direction of AI motion capture technology, analyzes issues such as errors in multi-person motion capture, and examines its broad prospects. With the continuous advancement of AI, such systems can recognize and track complex movements and expressions faster and more accurately, reducing human error and improving processing speed and efficiency. This advancement lowers technological barriers and accelerates the fusion of virtual and real worlds.
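
As a rough, hypothetical illustration of the markerless, video-based capture pipeline analyzed above, the sketch below uses the open-source MediaPipe Pose model (not DeepMotion's proprietary service) to extract body landmarks from a video file; the file name is a placeholder.

```python
# Minimal sketch of markerless, video-based pose capture in the spirit of the
# AI pipelines the paper analyzes. Uses the open-source MediaPipe Pose model,
# not DeepMotion's proprietary service; "input.mp4" is a placeholder.
import cv2
import mediapipe as mp

pose = mp.solutions.pose.Pose(static_image_mode=False)
cap = cv2.VideoCapture("input.mp4")

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB input; OpenCV decodes frames as BGR.
    result = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.pose_landmarks:
        # 33 body landmarks with normalized x/y and a relative depth z.
        nose = result.pose_landmarks.landmark[0]
        print(f"nose: x={nose.x:.3f} y={nose.y:.3f} z={nose.z:.3f}")

cap.release()
pose.close()
```

Note that this model tracks a single person per frame, which echoes why multi-person scenes remain a hard case in the paper's error analysis.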

Development of Motion Recognition Platform Using Smart-Phone Tracking and Color Communication (스마트 폰 추적 및 색상 통신을 이용한 동작인식 플랫폼 개발)

  • Oh, Byung-Hun
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.17 no.5 / pp.143-150 / 2017
  • In this paper, we propose a novel motion recognition platform using smart-phone tracking and color communication. The interface requires only a camera and a personal smart-phone, providing motion control without expensive equipment. The platform recognizes the user's gestures by tracking the 3D distance and rotation angle of the smart-phone, which acts essentially as a motion controller in the user's hand. A color-coded communication method using RGB color combinations is also included in the interface: users can conveniently send or receive text data through this function, and data can be transferred continuously even while the user is performing gestures. We present results from implementing viable content based on the proposed motion recognition platform.
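
The paper does not publish its exact RGB coding scheme; the following is a minimal, hypothetical sketch of the color-coded text channel idea, splitting each byte across the three color channels at widely spaced intensity levels so a camera can decode them despite noise.

```python
# Hypothetical color-coded text channel: each byte is split across the R, G
# and B channels (3+3+2 bits) and mapped to widely spaced intensity levels.
# The paper's actual coding scheme is not published; this is an assumption.

def byte_to_rgb(b: int) -> tuple[int, int, int]:
    r = (b >> 5) & 0b111           # top 3 bits    -> 8 levels
    g = (b >> 2) & 0b111           # middle 3 bits -> 8 levels
    b2 = b & 0b11                  # bottom 2 bits -> 4 levels
    return (r * 255 // 7, g * 255 // 7, b2 * 255 // 3)

def rgb_to_byte(rgb: tuple[int, int, int]) -> int:
    r = round(rgb[0] * 7 / 255)
    g = round(rgb[1] * 7 / 255)
    b2 = round(rgb[2] * 3 / 255)
    return (r << 5) | (g << 2) | b2

message = "hi"
frames = [byte_to_rgb(c) for c in message.encode("ascii")]
decoded = bytes(rgb_to_byte(f) for f in frames).decode("ascii")
assert decoded == message
```

Spacing the levels widely (8 or 4 per channel rather than 256) is what makes camera-side decoding robust to lighting and sensor noise.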

Study on improvement of the pupil motion recognition algorithm for human-computer interface system (사람 기계간 의사소통 시스템을 위한 눈동자 모션 인식 알고리즘 개선에 대한 연구)

  • Heo, Seung Won; Lee, Hee Bin; Lee, Seung Jun; Yu, Yun Seop
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2018.10a / pp.377-378 / 2018
  • This paper introduces an improvement to the pupil motion recognition algorithm of the previously reported "Eye-Motion Communication System using FPGA and OpenCV". The system is designed for patients with general paralysis or Lou Gehrig's disease who cannot move their bodies naturally; it recognizes pupil motion and selects text on the FPGA in real time. In this paper, we improve the speed of motion recognition by minimizing calls to the eye-detection function, based on the assumption that the user is a generally paralyzed patient.
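
The authors target an FPGA, but the speed-up idea, running the expensive eye-detection step only occasionally because a paralyzed user's head barely moves, can be sketched with OpenCV; the detection interval and cascade file below are assumptions.

```python
# Sketch of the reported speed-up: the expensive eye-detection step runs only
# every N frames while pupil tracking reuses the cached region of interest.
# Frame interval and cascade file are assumptions; the paper runs on an FPGA.
import cv2

eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")
cap = cv2.VideoCapture(0)

roi, frame_idx, DETECT_EVERY = None, 0, 30

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if roi is None or frame_idx % DETECT_EVERY == 0:
        eyes = eye_cascade.detectMultiScale(gray, 1.1, 5)
        if len(eyes):
            roi = tuple(eyes[0])            # (x, y, w, h) of first eye found
    if roi is not None:
        x, y, w, h = roi
        eye = gray[y:y + h, x:x + w]
        # Darkest point inside the cached ROI as a crude pupil estimate.
        _, _, min_loc, _ = cv2.minMaxLoc(cv2.GaussianBlur(eye, (9, 9), 0))
        print("pupil at", (x + min_loc[0], y + min_loc[1]))
    frame_idx += 1
```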


A study on the Interactive Expression of Human Emotions in Typography

  • Lim, Sooyeon
    • International Journal of Advanced Culture Technology / v.10 no.1 / pp.122-130 / 2022
  • In modern times text has become image, and typography, a combination of image and text, is a style easily encountered in everyday life. It is developing not only to convey meaning in communication but also, as a medium with an aesthetic format, to bring joy and beauty to our lives. Through case analysis, this study shows that typography is a tool for expressing human emotions and investigates how its characteristics change along with the media. In particular, the interactive communication tools and methods that interactive typography uses to express viewers' emotions are described in detail. We created interactive typography from the input text, music selected by the viewer, and the viewer's movement. Applying it in an exhibition confirmed that interactive typography can function as an effective communication medium, combining the iconicity of letter signs with their cognitive function when coupled with the audience's intentional motion.
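
As a purely illustrative sketch (not the authors' system), the following maps a viewer's tracked motion magnitude to per-letter displacement, one simple way typography can react to audience movement; the motion value stands in for real tracker output.

```python
# Illustrative only: each letter of the input text is displaced by a sine
# wave whose amplitude follows the magnitude of the viewer's tracked motion.
# motion_magnitude is a stand-in for real tracker output, not the paper's.
import math

def letter_offsets(text: str, motion_magnitude: float, t: float):
    """Return (char, x, y) positions; y oscillates with viewer motion."""
    positions = []
    for i, ch in enumerate(text):
        x = i * 20                                    # fixed letter advance
        y = motion_magnitude * 30 * math.sin(t * 2 + i * 0.5)
        positions.append((ch, x, y))
    return positions

for ch, x, y in letter_offsets("MOTION", motion_magnitude=0.8, t=1.0):
    print(f"{ch}: ({x:5.1f}, {y:6.2f})")
```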

A Model to Automatically Generate Non-verbal Expression Information for Korean Utterance Sentence (한국어 발화 문장에 대한 비언어 표현 정보를 자동으로 생성하는 모델)

  • Jaeyoon Kim; Jinyea Jang; San Kim; Minyoung Jung; Hyunwook Kang; Saim Shin
    • Annual Conference on Human and Language Technology / 2023.10a / pp.91-94 / 2023
  • To develop an artificial-intelligence agent capable of natural interaction, non-verbal expressions must be considered in addition to verbal ones. This paper introduces a study on generating motion, a form of non-verbal expression, from Korean utterance sentences. We built a dataset from YouTube videos and experimented with a model implemented using T2M-GPT, an existing text-to-motion model, and the language encoder of VL-KE-T5, which was trained jointly on heterogeneous modality data. In the experiments, the motion representations generated for Korean utterance text achieved an FID score of 0.11, demonstrating the feasibility of generating non-verbal expression information from Korean utterance data.
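
The FID score cited above compares feature statistics of real and generated motions; a minimal sketch of the standard computation follows, with random arrays standing in for motion-encoder embeddings.

```python
# Sketch of the Frechet Inception Distance (FID) used to score generated
# motions (lower is better; the paper reports 0.11). In practice the features
# would be motion-encoder embeddings; random data here is a placeholder.
import numpy as np
from scipy import linalg

def fid(feat_real: np.ndarray, feat_gen: np.ndarray) -> float:
    mu1, mu2 = feat_real.mean(0), feat_gen.mean(0)
    s1 = np.cov(feat_real, rowvar=False)
    s2 = np.cov(feat_gen, rowvar=False)
    covmean = linalg.sqrtm(s1 @ s2)
    if np.iscomplexobj(covmean):       # discard numerical imaginary parts
        covmean = covmean.real
    # ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 (S1 S2)^(1/2))
    return float(((mu1 - mu2) ** 2).sum() + np.trace(s1 + s2 - 2 * covmean))

rng = np.random.default_rng(0)
print(fid(rng.normal(size=(256, 64)), rng.normal(size=(256, 64))))
```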


The Analysis of the Level of Technological Maturity for the u-Learning of Public Education by Mobile Phone (휴대폰을 이용한 공교육 u-러닝의 기술 성숙도 분석)

  • Lee, Jae-Won; Na, Eun-Gu; Song, Gil-Ju
    • IE interfaces / v.19 no.4 / pp.306-315 / 2006
  • In this paper we analyze whether the mobile phone, already widely adopted among the younger generation, can serve as a device for u-learning in Korean public education. For this purpose we examine technical maturity along three axes. First, we examine the authoring of mobile internet-based content, both text and motion picture, by content developers in public education; we find that authoring text poses almost no difficulty, but authoring motion pictures shows some problems. Second, we examine whether u-learners can easily obtain and use u-content on both mobile phones and PCs; we find that downloading motion picture content to mobile phones is very limited, so we discuss the usability and problems of various PC Sync tools and propose their standardization. Finally, we discuss the need for a ubiquitous SCORM that would enable u-content to be reused across the mobile phones of different Korean telcos, and describe some functionality of both ubiquitous SCORM and a u-LMS. To our knowledge, this study is among the first to examine the technological maturity required to introduce mobile-phone-based u-learning in Korean public education, and it can serve as a reference for studies of u-learning over wireless telecommunication channels other than mobile telecommunication.

Wavelet Based Compression Technique for Efficient Image Transmission in the Wireless Multimedia Sensor Networks (무선 멀티미디어 센서 네트워크에서 효율적인 이미지 전송을 위한 웨이블릿 기반 압축 기법)

  • Kwon, Young-Wan; Lee, Joa-Hyoung; Jung, In-Bum
    • Journal of the Korea Institute of Information and Communication Engineering / v.12 no.12 / pp.2323-2329 / 2008
  • Advances in wireless communication and hardware technology have made it possible to manufacture high-performance tiny sensor nodes. More recently, the availability of inexpensive camera modules able to capture multimedia data from the environment has fostered the development of Wireless Multimedia Sensor Networks (WMSNs). A WMSN extends the traditional text-based wireless sensor network with advanced techniques for sensing, transmitting, and processing multimedia content. Since multimedia content is significantly larger than text-based data, it requires substantial computing power and high network bandwidth. To process multimedia content on wireless sensor nodes with very limited computing power and energy, a WMSN technique must account for computing resources and efficient transmission. In this paper, we propose YWCE, a new image compression technique for efficient compression and transmission of image data in WMSNs. YWCE introduces four types of motion estimation and compensation techniques based on the resolution scalability of the wavelet transform. Experimental results show that YWCE achieves high compression performance across the four technique types.
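
YWCE's motion estimation and compensation are not reproduced here, but the resolution scalability it builds on can be sketched with PyWavelets: a multi-level 2D wavelet transform yields a small low-resolution subband a sensor node could transmit first, adding detail subbands as bandwidth allows.

```python
# Sketch of wavelet resolution scalability (the property YWCE builds on),
# not of YWCE itself. A two-level decomposition leaves a 32x32 approximation
# of a 128x128 frame that could be sent first on a constrained link.
import numpy as np
import pywt

image = np.random.rand(128, 128)              # stand-in for a camera frame

# Two-level Haar decomposition: [LL2, (LH2, HL2, HH2), (LH1, HL1, HH1)]
coeffs = pywt.wavedec2(image, "haar", level=2)
ll2 = coeffs[0]
print("full frame:", image.shape, "-> coarse subband:", ll2.shape)

# Coarse-only reconstruction: zero the detail subbands before inverse DWT.
coarse = [ll2] + [tuple(np.zeros_like(d) for d in lvl) for lvl in coeffs[1:]]
preview = pywt.waverec2(coarse, "haar")
print("low-resolution preview reconstructed:", preview.shape)
```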

Postural Control Strategies on Smart Phone use during Gait in Over 50-year-old Adults (50세 이상 성인의 보행 시 스마트폰 사용에 따른 자세 조절 전략)

  • Yu, Yeon Joo; Lee, Ki Kwang; Lee, Jung Ho; Kim, Suk Bum
    • Korean Journal of Applied Biomechanics / v.29 no.2 / pp.71-77 / 2019
  • Objective: The aim of this study was to investigate postural control strategies during smart phone use while walking in adults over 50 years old. Method: 8 elderly subjects (age: 55.5 ± 3.29 yrs, height: 159.75 ± 4.20 cm, weight: 62.87 ± 8.44 kg) and 10 young subjects (age: 23.8 ± 3.19 yrs, height: 158.8 ± 5.97 cm, weight: 53.6 ± 5.6 kg) participated in the study. They walked at a comfortable pace along a gaitway of ~8 m while: 1) reading text on a smart phone, 2) typing text on a smart phone, or 3) walking without a phone. Gait parameters and kinematic data were evaluated using a three-dimensional movement analysis system. Results: While reading or writing text messages, participants walked with slower speed, reduced stride length and step width, greater flexion range of motion of the head, and more flexion of the thorax compared with normal walking. Conclusion: Texting or reading messages on a smart phone while walking may pose an additional risk to pedestrians' safety.
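
As a back-of-envelope illustration (not the authors' motion-analysis pipeline), two of the reported gait parameters, walking speed and stride length, can be derived from successive heel-strike positions of the same foot; the numbers below are invented.

```python
# Toy computation of gait speed and stride length from heel strikes of the
# same foot. Values are invented; a real system would extract heel-strike
# events from 3D marker trajectories.
heel_strikes = [  # (time s, forward position m)
    (0.0, 0.00), (1.1, 1.18), (2.2, 2.34), (3.3, 3.49),
]

strides = [
    (t2 - t1, x2 - x1)
    for (t1, x1), (t2, x2) in zip(heel_strikes, heel_strikes[1:])
]
stride_len = sum(d for _, d in strides) / len(strides)
speed = sum(d for _, d in strides) / sum(t for t, _ in strides)
print(f"mean stride length: {stride_len:.2f} m, gait speed: {speed:.2f} m/s")
```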

Development of Video Caption Editor with Kinetic Typography (글자가 움직이는 동영상 자막 편집 어플리케이션 개발)

  • Ha, Yea-Young; Kim, So-Yeon; Park, In-Sun; Lim, Soon-Bum
    • Journal of Korea Multimedia Society / v.17 no.3 / pp.385-392 / 2014
  • In this paper, we developed an Android application named VIVID with which users can easily edit moving captions on smartphone videos. The app makes it convenient to set the time range, text, location, and motion of caption text on the video. The editing result is uploaded to a web server as HTML and can be shared with other users.
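
The VIVID file format is not published; the sketch below shows a hypothetical caption record with the four editable properties named in the abstract (time range, text, location, motion) and a toy export to an HTML snippet driven by a CSS animation class.

```python
# Hypothetical caption record and HTML export. The actual VIVID format is
# not published; field names and the CSS class scheme are assumptions.
caption = {
    "start": 2.0, "end": 5.5,          # visible time range, in seconds
    "text": "Hello!",
    "x": 40, "y": 80,                  # position, percent of video frame
    "motion": "slide-up",              # name of a predefined CSS animation
}

html = f"""
<div class="caption {caption['motion']}"
     style="left:{caption['x']}%; top:{caption['y']}%;
            animation-delay:{caption['start']}s;
            animation-duration:{caption['end'] - caption['start']}s;">
  {caption['text']}
</div>
"""
print(html)
```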