• Title/Summary/Keyword: 3D animation

Search Results: 668

A Study on Contents Manufacturing System for Massive Contents Production

  • Ji, Su-Mi;Lee, Jeong-Joong;Kwon, Sang-Pill;Kim, Jin-Guk;Yu, Chang-Man;Lee, Jeong-Gyu;Jeon, Se-Jong;Jeong, Tae-Wan;Kang, Dong-Wann;Park, Sang-Il;Song, Oh-Young;Lee, Jong-Weon;Yoon, Kyung-Hyun;Han, Chang-Wan;Baik, Sung-Wook
    • Journal of Korea Multimedia Society
    • /
    • v.13 no.12
    • /
    • pp.1832-1842
    • /
    • 2010
  • This paper introduces a new automatic processing system: "Contents Factory" for the mass production of contents. Through the contents factory, we provide an authoring environment to improve the usability and the efficiency in producing contents. The contents factory integrates recycling techniques for contents resources, contents development engines, authoring tools, and interfaces into a total processing system. Since it is multi-platform based including mobile devices as well as PCs, one can easily produce complete PC and mobile contents from raw resources. We produced an example, "Sejong square" via the contents factory in order to demonstrate its effectiveness and usability.
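
The "Contents Factory" described above turns raw resources into both PC and mobile content through one integrated pipeline. As a purely illustrative sketch (the paper does not publish its architecture in code form), the snippet below models a multi-target export step in Python; the target names, asset constraints, and file names are assumptions, not details from the paper.

```python
from dataclasses import dataclass

@dataclass
class Target:
    name: str
    max_texture_px: int  # assumed per-platform asset constraint

# Hypothetical export targets for a "contents factory"-style pipeline.
TARGETS = [Target("pc", 2048), Target("mobile", 512)]

def export_asset(asset_name: str, source_px: int):
    """Produce one build entry per platform, downscaling textures that
    exceed the platform limit. A real engine would also re-encode
    models, audio, and scripts for each target."""
    builds = []
    for target in TARGETS:
        out_px = min(source_px, target.max_texture_px)
        builds.append({"target": target.name,
                       "asset": asset_name,
                       "texture_px": out_px})
    return builds

# Example run with a made-up source texture.
for build in export_asset("plaza_texture.png", 4096):
    print(build)
```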

A Study of the French "École des Parents" as a Prospective Model for Korean Parent Education (한국에의 전망을 모색하기 위한 프랑스 "부모학교"의 분석)

  • Jeong, Mi-Ree
    • Korean Journal of Human Ecology
    • /
    • v.3 no.1
    • /
    • pp.1-14
    • /
    • 1994
  • This study was carried out to develop a parent education methodology by analysing the "École des Parents" of France. The subject was approached through three methods. First, we examined the literature to identify the motivations behind its founding in their historical and sociological context. Second, to analyse the chronological development and modification of the "École des Parents", we interviewed people from three periods: the founder, later developers, and currently working animators. In addition, we reviewed three hundred eighty volumes of the journal École des Parents, from the first issues to October 1993. Third, the study drew on participant observation of actual activities and on analysis of statistical records and article subjects. As a result, we noted the following characteristics of the French parent education system. All the regional "École des Parents" are aligned with the "Fédération Nationale des Écoles des Parents et des Éducateurs" (F.N.E.P.E.) in educational policy, but their activities and educational methods are independent of the F.N.E.P.E., and most regional "École des Parents" focus their educational programmes on middle-class families in economic terms. These programmes adjust rapidly and systematically to social requirements thanks to sustained, intensive research. Modern programmes tend to involve all members of the family instead of targeting only mothers. Education delivered solely through lectures and speeches has also been replaced by discussion, forums, and group animation designed to induce self-correction. We propose that the F.N.E.P.E. system can serve as a model for solving many of the problems currently facing the Korean parent education system.

Study of Educational Insect Robot that Utilizes Mobile Augmented Reality Digilog Book (모바일 증강현실 Digilog Book을 활용한 교육용 곤충로봇 콘텐츠)

  • Park, Young-Sook;Park, Dea-Woo
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.18 no.6
    • /
    • pp.1355-1360
    • /
    • 2014
  • In this paper, we apply a mobile augmented reality Digilog Book to insect robot learning. In the electronic era, books written on paper are moving into virtual space. Virtual reality is a technology that lets users indirectly experience, through an immersive interface, situations that are spatially and physically constrained and cannot be experienced directly in the real world. The Digilog Book fuses analog paper with digital content and uses augmented reality technology so that learners can experience various interactions. Key elements such as motion, three-dimensional images, and animation are applied to enrich learning, and the content is designed so that the assembly order of the blocks can be grasped more easily, making block assembly simpler. In particular, because the Digilog Book runs on a mobile phone, robot learning can take place anywhere and at any time.
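
The abstract above describes overlaying 3D models and animation on a paper book viewed through a phone camera. As a rough illustration only (the paper does not specify its tracking method), the following sketch uses OpenCV's ArUco module, in the pre-4.7 API style, to detect a printed marker on a page; a full system would anchor the 3D insect model and its animation at the detected pose. The marker dictionary and camera index are assumptions.

```python
import cv2

# Assumption: each book page carries a printed ArUco marker; the real
# Digilog Book pipeline may use different (e.g. markerless) tracking.
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)

cap = cv2.VideoCapture(0)  # assumed camera index
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary)
    if ids is not None:
        # In a full system, the marker pose would place the 3D insect
        # model and its animation; here we only visualize the detection.
        cv2.aruco.drawDetectedMarkers(frame, corners, ids)
    cv2.imshow("digilog-sketch", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```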

A Study on XR Handball Sports for Individuals with Developmental Disabilities

  • Byong-Kwon Lee;Sang-Hwa Lee
    • Journal of the Korea Society of Computer and Information
    • /
    • v.29 no.6
    • /
    • pp.31-38
    • /
    • 2024
  • This study proposes a novel approach to enhancing the social inclusion and participation of individuals with developmental disabilities. Utilizing cutting-edge virtual reality (VR) technology, we designed and developed a metaverse simulator that enables individuals with developmental disabilities to safely and conveniently experience indoor handicapped handball sports. This simulator provides an environment where individuals with disabilities can experience and practice handball matches. For the modeling and animation of handball players, we employed advanced modeling and motion capture technologies to accurately replicate the movements required in handball matches. Additionally, we ported various training programs, including basic drills, penalty throws, and target games, onto XR (Extended Reality) devices. Through this research, we have explored the development of immersive assistive tools that enable individuals with developmental disabilities to more easily participate in activities that may be challenging in real-life scenarios. This is anticipated to broaden the scope of social participation for individuals with developmental disabilities and enhance their overall quality of life.
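
As a purely illustrative sketch (not the authors' implementation), the snippet below shows how one training drill mentioned in the abstract, the penalty throw, might be scored in a simulator: a thrown ball follows simple projectile motion and is checked against a goal target zone. The 7 m throw distance, release height, target height, and tolerance are all assumed values.

```python
import math

GRAVITY = 9.81  # m/s^2

def penalty_throw_hits_target(speed, elevation_deg, throw_distance=7.0,
                              target_height=1.2, tolerance=0.3):
    """Return True if a throw released at the assumed 7 m penalty line
    crosses the goal line within `tolerance` metres of the target height.
    Release height and target values are assumptions for illustration."""
    release_height = 1.5  # assumed release height in metres
    angle = math.radians(elevation_deg)
    vx = speed * math.cos(angle)
    vy = speed * math.sin(angle)
    if vx <= 0:
        return False
    t = throw_distance / vx  # time for the ball to reach the goal line
    height_at_goal = release_height + vy * t - 0.5 * GRAVITY * t * t
    return abs(height_at_goal - target_height) <= tolerance

# Example drill evaluation: three attempts with different speeds/angles.
for speed, angle in [(15.0, 5.0), (9.0, 15.0), (12.0, -2.0)]:
    print(speed, angle, penalty_throw_hits_target(speed, angle))
```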

Evaluation of Evacuation Safety in University Libraries Based on Pathfinder

  • Zechen Zhang;Jaewook Lee;Hasung Kong
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.16 no.2
    • /
    • pp.237-246
    • /
    • 2024
  • In recent years, the frequent occurrence of fire accidents in university libraries has posed significant threats to the safety of students' lives and property, alongside negative social impacts. Accurately analyzing the factors affecting evacuation during library fires and proposing optimized measures for safe evacuation is thus crucial. This paper utilizes a specific university library as a case study, simulating fire evacuation scenarios using the Pathfinder software, to assess and validate evacuation strategies and propose relevant optimizations. Pathfinder, developed by Thunderhead Engineering in the United States, is an intuitive and straightforward personnel emergency evacuation assessment system, offering advanced visualization interfaces and 3D animation effects. This study constructs evacuation models and performs simulation analysis for the selected university library using Pathfinder. The library's structural layout, people flow characteristics, and the nature of fire and smoke spread are considered in the analysis. Additionally, evacuation scenarios involving different fire outbreak locations and the status of emergency exits are examined. The findings underscore the importance of effective evacuation in fire situations, highlighting how environmental conditions, individual characteristics, and behavioral patterns significantly influence evacuation efficiency. Through these investigations, the study enhances understanding and optimization of evacuation strategies in fire scenarios, thereby improving safety and efficiency. The research not only provides concrete and practical guidelines for building design, management, and emergency response planning in libraries but also offers valuable insights for the design and management of effective evacuation systems in buildings, which is crucial for ensuring occupant safety and minimizing loss of life in potential hazard situations.
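
Independently of Pathfinder, the egress analysis described above can be approximated with the standard hydraulic model: total evacuation time is roughly the walking time to the exit plus the queueing time at the exit, which is governed by exit width and specific flow. The sketch below uses commonly cited ballpark values (walking speed about 1.2 m/s, specific flow about 1.3 persons/s per metre of width); these are illustrative assumptions, not values from the paper.

```python
def estimated_evacuation_time(occupants, travel_distance_m, exit_width_m,
                              walking_speed=1.2, specific_flow=1.3):
    """Rough hydraulic-model estimate of egress time in seconds.

    occupants         -- number of people using this exit
    travel_distance_m -- longest walking distance to the exit
    exit_width_m      -- effective (clear) exit width in metres
    walking_speed     -- assumed unimpeded walking speed, m/s
    specific_flow     -- assumed flow rate, persons per second per metre
    """
    travel_time = travel_distance_m / walking_speed
    flow_time = occupants / (specific_flow * exit_width_m)
    # Simple first-order estimate: the last person's walking time plus
    # the time needed for everyone to pass through the exit.
    return travel_time + flow_time

# Example: a 300-person reading room, 40 m to a 1.6 m wide exit.
print(round(estimated_evacuation_time(300, 40, 1.6), 1), "seconds")
```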

A Study on Improving the Production Process of Digital Entertainment Images Using a Motion Capture System (모션캡쳐시스템을 활용한 디지털 엔터테인먼트 영상에서 제작과정상 개선 방안에 관한 연구)

  • Lee, Man-Woo;Yun, Deok-Un;Park, Jin-Seok;Kim, Soon-Gohn
    • Proceedings of the Korea Contents Association Conference
    • /
    • 2006.11a
    • /
    • pp.824-828
    • /
    • 2006
  • The introduction of motion capture systems to the field of digital entertainment paved the way for accelerating the development of 3D character animation. Motion capture systems have developed to the level where they can capture the fierce motions of characters, particularly in digital game imagery, improve dynamic qualities by capturing the movement of human muscles, and express true human emotion by capturing facial wrinkles and expressions. This extension of realistic expression has led to increasing use in movies, TV, advertisements, and music videos as well as in the game industry at the centre of digital entertainment. In practice, however, the domestic image production process faces many difficulties compared with competitors such as the USA and Japan, owing to shortfalls in technical expertise and capital, a shortage of motion capture professionals, and the small size of the domestic motion capture image market. Hence, this study surveys local and overseas examples of image production and suggests ways to resolve the technical problems in the production process of digital entertainment images, in terms of an integrated motion capture system, professional motion capture personnel, and the development of in-house motion capture programs.

Web-based Text-To-Sign Language Translating System (웹기반 청각장애인용 수화 웹페이지 제작 시스템)

  • Park, Sung-Wook;Wang, Bo-Hyeun
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.24 no.3
    • /
    • pp.265-270
    • /
    • 2014
  • Hearing-impaired people have difficulty hearing, so it is also hard for them to learn letters that represent sound and text that conveys complex and abstract concepts. It has therefore been a natural choice for hearing-impaired people to communicate in sign language, which employs facial expressions and hand and body motion. However, the major communication media in daily life are text and speech, which are big obstacles for hearing-impaired people in accessing information, learning, carrying out intellectual activities, and getting jobs. As delivering information via the internet becomes common, hearing-impaired people experience even more difficulty in accessing information, since the internet represents information mostly in text form; this intensifies the imbalance in information accessibility. This paper reports a web-based text-to-sign-language translating system that helps web designers use sign language in web page design. Because the system is web-based, web designers can use it from any common internet browsing environment. The system takes the format of a bulletin board as its user interface. When web designers write paragraphs and post them through the bulletin board to the translating server, the server translates the incoming text into sign language, animates it with a 3D avatar, and records the animation in an MP4 file. The file addresses are fetched by the bulletin board, which lets web designers embed the translated sign language files into their web pages using HTML5 or JavaScript. We also analyzed text used on public-service web pages, identified new words for the translating system, and added them to improve translation. This addition is expected to make public-service web pages more widely and easily accessible to hearing-impaired people.
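
A minimal sketch of the bulletin-board-to-server round trip described above, assuming a JSON POST endpoint named /translate and a static MP4 directory; the endpoint name, request fields, and the stub translator are hypothetical, since the paper does not publish its API.

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

def translate_to_sign_mp4(text: str) -> str:
    """Stub for the real pipeline: text -> sign-language gloss ->
    3D avatar animation -> recorded MP4. Here it only fabricates a path."""
    filename = f"{abs(hash(text))}.mp4"
    # A real server would render the avatar animation and write the file.
    return f"/static/sign/{filename}"

@app.route("/translate", methods=["POST"])  # hypothetical endpoint name
def translate():
    text = request.get_json(force=True).get("text", "")
    return jsonify({"video_url": translate_to_sign_mp4(text)})

# The bulletin board page would then embed the returned address, e.g.:
#   <video src="/static/sign/1234.mp4" controls></video>
if __name__ == "__main__":
    app.run(port=5000)
```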

A Study on LBS Multimedia Services Using GML 3.0 (GML 3.0을 이용한 LBS 멀티미디어 서비스에 관한 연구)

  • Jung, Kee-Joong;Lee, Jun-Woo;Kim, Nam-Gyun;Hong, Seong-Hak;Choi, Beyung-Nam
    • Proceedings of the Korean Spatial Information System Society Conference
    • /
    • 2004.12a
    • /
    • pp.169-181
    • /
    • 2004
  • SK Telecom constructed the GIMS system as the common base framework of its LBS/GIS service system, based on the OGC (OpenGIS Consortium) international standard, for the first mobile vector map service in 2002. As service content has become more complex, however, renovation has been needed to satisfy growing multi-purpose, multi-function, and maximum-efficiency requirements. This research prepares a GML 3-based platform to upgrade the service from the GML 2-based GIMS system, so that a variety of application services can obtain location and geographic data easily and freely. From GML 3.0, animation, event handling, resources for style mapping, topology specification for 3D, and telematics services were selected for the mobile LBS multimedia service, and a schema and transfer protocol were developed and organized to optimize data transfer to the MS (Mobile Station). The upgrade to the GML 3.0-based GIMS system provides an innovative framework, in terms of both construction and service, which has been implemented and applied to previous research and systems. In addition, a GIMS channel interface has been implemented to simplify access to the GIMS system, and the internal GIMS service components, WFS and WMS, have been given enhanced and expanded functions.
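
For readers unfamiliar with GML, the sketch below builds a minimal GML 3 point feature with Python's standard xml.etree.ElementTree; the wrapper element, feature name, and coordinates are made up for illustration and are not taken from the GIMS schema described in the paper.

```python
import xml.etree.ElementTree as ET

GML_NS = "http://www.opengis.net/gml"
ET.register_namespace("gml", GML_NS)

def gml_point_feature(name, lon, lat):
    """Build a tiny GML 3 fragment: a named feature wrapping a gml:Point.
    The <Poi> wrapper element is a hypothetical application-schema name."""
    poi = ET.Element("Poi")
    ET.SubElement(poi, f"{{{GML_NS}}}name").text = name
    point = ET.SubElement(poi, f"{{{GML_NS}}}Point", srsName="EPSG:4326")
    # gml:pos holds coordinates as a space-separated list (GML 3 style).
    ET.SubElement(point, f"{{{GML_NS}}}pos").text = f"{lat} {lon}"
    return poi

# Illustrative coordinates only.
fragment = gml_point_feature("City Hall", 126.9779, 37.5663)
print(ET.tostring(fragment, encoding="unicode"))
```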

Pose Transformation of a Frontal Face Image by Invertible Meshwarp Algorithm (역전가능 메쉬워프 알고리즘에 의한 정면 얼굴 영상의 포즈 변형)

  • 오승택;전병환
    • Journal of KIISE:Software and Applications
    • /
    • v.30 no.1_2
    • /
    • pp.153-163
    • /
    • 2003
  • In this paper, we propose a new image-based rendering (IBR) technique for transforming the pose of a face using only a frontal face image and its mesh, without a three-dimensional model. To substitute for the 3D geometric model, we first build a standard mesh set of a particular person for several views of the face: front, left, right, half-left and half-right. For a given person, only the frontal mesh of the frontal face image to be transformed is composed; the other meshes are generated automatically from the standard mesh set. The frontal face image is then geometrically transformed to give different views using the Invertible Meshwarp Algorithm, which is improved to tolerate the overlap or inversion of neighboring vertices in the mesh. The same warping algorithm is used to generate opening and closing effects for the eyes and mouth. To evaluate the transformation performance, we captured dynamic images of 10 people rotating their heads horizontally and measured the location error of 14 main features between corresponding original and transformed facial images. That is, for each feature point we calculated the average difference between its distance from the center of both eyes in the original image and in the transformed image. As a result, the average feature location error is about 7.0% of the distance from the center of both eyes to the center of the mouth.
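
The evaluation metric described above (feature location error normalized by the eye-to-mouth distance) can be written compactly. The sketch below assumes features are given as 2D pixel coordinates and reproduces only that normalization, not the warping algorithm itself; all coordinates in the example are made up.

```python
import numpy as np

def normalized_feature_error(orig_pts, warped_pts, eye_center, mouth_center):
    """Average feature location error, expressed as a fraction of the
    distance from the eye center to the mouth center (cf. the ~7.0%
    figure reported in the abstract). Points are (x, y) pairs."""
    orig_pts = np.asarray(orig_pts, dtype=float)
    warped_pts = np.asarray(warped_pts, dtype=float)
    eye_center = np.asarray(eye_center, dtype=float)
    mouth_center = np.asarray(mouth_center, dtype=float)

    # Distance of each feature from the eye center, before and after warping.
    d_orig = np.linalg.norm(orig_pts - eye_center, axis=1)
    d_warp = np.linalg.norm(warped_pts - eye_center, axis=1)

    scale = np.linalg.norm(mouth_center - eye_center)
    return float(np.mean(np.abs(d_orig - d_warp)) / scale)

# Toy example with 3 of the 14 feature points (coordinates are made up).
orig = [(100, 120), (140, 120), (120, 180)]
warped = [(102, 121), (138, 119), (121, 183)]
print(normalized_feature_error(orig, warped, eye_center=(120, 110),
                               mouth_center=(120, 170)))
```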

Speech Visualization of Korean Vowels Based on the Distances Among Acoustic Features (음성특징의 거리 개념에 기반한 한국어 모음 음성의 시각화)

  • Pok, Gouchol
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.12 no.5
    • /
    • pp.512-520
    • /
    • 2019
  • It is quite useful to represent speech visually, both for learners studying foreign languages and for hearing-impaired people who cannot hear speech directly, and a number of studies have been presented in the literature. They remain, however, at the level of representing the characteristics of speech with colors or showing the changing shape of the lips and mouth with animation. As a result, such methods cannot tell users how far their pronunciation is from the standard one, and they make it technically difficult to build a system in which users can correct their pronunciation interactively. To address these drawbacks, this paper proposes a speech visualization model based on the relative distance between the user's speech and the standard one, and suggests implementation directions by applying the model to the visualization of Korean vowels. The method extracts the three formants F1, F2, and F3 from the speech signal and feeds them into Kohonen's SOM, which maps the results onto a 2D screen and represents each utterance as a point. We present a real system, implemented with open-source formant analysis software, applied to the speech of a Korean instructor and several foreign students studying Korean; its user interface for screen display was built with JavaScript.
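
A minimal sketch of the formant-to-screen mapping described above: a small self-organizing map trained on (F1, F2, F3) triples, with each utterance then plotted at its best-matching unit. The formant values below are made-up placeholders, not measurements from the paper; a real system would obtain them from a formant analysis tool such as Praat.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder (F1, F2, F3) triples in Hz; a real system would extract
# these from recorded vowels with formant analysis software.
formants = np.array([
    [270, 2290, 3010],   # roughly an /i/-like vowel
    [730, 1090, 2440],   # roughly an /a/-like vowel
    [300,  870, 2240],   # roughly a /u/-like vowel
    [530, 1840, 2480],   # roughly an /e/-like vowel
], dtype=float)

# Normalize each formant dimension to [0, 1] so no single formant dominates.
lo, hi = formants.min(axis=0), formants.max(axis=0)
data = (formants - lo) / (hi - lo)

# A tiny Kohonen SOM: a 10x10 grid of weight vectors in formant space.
grid_w, grid_h, dim = 10, 10, 3
weights = rng.random((grid_w, grid_h, dim))
coords = np.stack(np.meshgrid(np.arange(grid_w), np.arange(grid_h),
                              indexing="ij"), axis=-1)

def best_matching_unit(x):
    dist = np.linalg.norm(weights - x, axis=2)
    return np.unravel_index(np.argmin(dist), dist.shape)

for step in range(2000):
    x = data[rng.integers(len(data))]
    bmu = np.array(best_matching_unit(x))
    lr = 0.5 * (1 - step / 2000)                # decaying learning rate
    sigma = max(1.0, 3.0 * (1 - step / 2000))   # decaying neighborhood radius
    influence = np.exp(-np.sum((coords - bmu) ** 2, axis=2) / (2 * sigma ** 2))
    weights += lr * influence[..., None] * (x - weights)

# Each utterance becomes a point on the 2D grid (its "screen" position).
for vec, label in zip(data, ["i-like", "a-like", "u-like", "e-like"]):
    print(label, "->", best_matching_unit(vec))
```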