• Title/Summary/Keyword: Multi-modal AI

Research on Generative AI for Korean Multi-Modal Montage App (한국형 멀티모달 몽타주 앱을 위한 생성형 AI 연구)

  • Lim, Jeounghyun; Cha, Kyung-Ae; Koh, Jaepil; Hong, Won-Kee
    • Journal of Service Research and Studies, v.14 no.1, pp.13-26, 2024
  • Multi-modal generation is the process of generating results from a variety of information, such as text, images, and audio. With the rapid development of AI technology, a growing number of multi-modal systems synthesize different types of data to produce results. In this paper, we present an AI system that uses speech and text recognition to describe a person and generate a montage image. While existing montage generation technology is based on the appearance of Westerners, the montage generation system developed in this paper learns a model based on Korean facial features. It can therefore create more accurate and effective Korean montage images from multi-modal voice and text input in Korean. Since the output of the developed montage generation app can serve as a draft montage, it can dramatically reduce the manual labor of montage production personnel. For this purpose, we used persona-based virtual-person montage data provided by AI-Hub of the National Information Society Agency. AI-Hub is an AI integration platform that aims to provide a one-stop service by building the training data needed to develop AI technology and services. The image generation system was implemented using VQGAN, a deep learning model for generating high-resolution images, and KoDALLE, a Korean-language image generation model. We confirmed that the trained AI model creates a montage of a face that closely matches the description given by voice and text. To verify the practicality of the developed montage generation app, 10 testers used it and more than 70% responded that they were satisfied. The montage generator can be used in fields such as criminal investigation, where facial features must be described and visualized.
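
As an illustration of the pipeline this abstract describes (a spoken and typed description, a KoDALLE-style text-to-image model, and a VQGAN decoder), the following Python sketch shows how the stages could fit together. The wrapper names (transcribe_korean_speech, KoDalleLikeGenerator, VQGANLikeDecoder) and all dimensions are hypothetical placeholders, not the authors' code.

```python
# A minimal sketch of a speech/text-to-montage pipeline under assumed interfaces.
from dataclasses import dataclass
import numpy as np


def transcribe_korean_speech(audio: np.ndarray) -> str:
    """Hypothetical ASR step: convert a spoken description to Korean text."""
    # In the paper this would be a Korean speech-recognition model.
    return "각진 턱, 짧은 머리, 두꺼운 눈썹"  # placeholder transcript


@dataclass
class KoDalleLikeGenerator:
    """Stands in for a KoDALLE-style text-to-image-token model."""
    codebook_size: int = 1024
    num_tokens: int = 256

    def text_to_image_tokens(self, description: str) -> np.ndarray:
        # Placeholder: a real model predicts discrete VQ codes from the text.
        rng = np.random.default_rng(abs(hash(description)) % (2**32))
        return rng.integers(0, self.codebook_size, size=self.num_tokens)


@dataclass
class VQGANLikeDecoder:
    """Stands in for a VQGAN decoder mapping discrete codes to an image."""
    resolution: int = 256

    def decode(self, tokens: np.ndarray) -> np.ndarray:
        # Placeholder: a real decoder produces a high-resolution face image.
        side = int(np.sqrt(tokens.size))
        grid = tokens.reshape(side, side) / 1024.0
        return np.kron(grid, np.ones((self.resolution // side,) * 2))


def generate_montage(audio: np.ndarray, extra_text: str) -> np.ndarray:
    """Fuse the spoken and typed description, then render a draft montage."""
    description = f"{transcribe_korean_speech(audio)}, {extra_text}"
    tokens = KoDalleLikeGenerator().text_to_image_tokens(description)
    return VQGANLikeDecoder().decode(tokens)


if __name__ == "__main__":
    montage = generate_montage(np.zeros(16000), "30대 한국인 남성")
    print(montage.shape)  # (256, 256) draft montage array
```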

Development of a Depression Prevention Platform using Multi-modal Emotion Recognition AI Technology (멀티모달 감정 인식 AI 기술을 이용한 우울증 예방 플랫폼 구축)

  • HyunBeen Jang; UiHyun Cho; SuYeon Kwon; Sun Min Lim; Selin Cho; JeongEun Nah
    • Proceedings of the Korea Information Processing Society Conference, 2023.11a, pp.916-917, 2023
  • To improve Korean emotion recognition based on analysis of the user's speech patterns and text classification, this study proposes a model that determines the final emotion by classifying a weighted sum of the outputs of a Macaron Net text model and an MFCC-based speech model, improving the accuracy from the previous 82.9% to 87.0% for the text model and 88.0% for the multi-modal model. By deploying this model at the core of a depression prevention platform, we aim to help address depression, which has emerged as a social problem since the COVID-19 pandemic.
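
A minimal sketch of the late-fusion rule described in this abstract, where the final emotion is read off a weighted sum of the text-model and speech-model outputs. The label set and the weight value are illustrative assumptions; the paper tunes the actual weighting.

```python
# Weighted late fusion of two emotion classifiers (illustrative labels/weights).
import numpy as np

EMOTIONS = ["joy", "sadness", "anger", "fear", "neutral"]  # assumed label set


def fuse_emotion_scores(
    text_probs: np.ndarray,
    speech_probs: np.ndarray,
    text_weight: float = 0.6,  # assumed weight; the paper determines its own
) -> str:
    """Return the emotion with the highest weighted-sum score."""
    fused = text_weight * text_probs + (1.0 - text_weight) * speech_probs
    return EMOTIONS[int(np.argmax(fused))]


if __name__ == "__main__":
    # e.g. softmax outputs of a Macaron Net text model and an MFCC speech model
    text_probs = np.array([0.10, 0.55, 0.10, 0.05, 0.20])
    speech_probs = np.array([0.05, 0.40, 0.30, 0.05, 0.20])
    print(fuse_emotion_scores(text_probs, speech_probs))  # -> "sadness"
```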

Study of the structural damage identification method based on multi-mode information fusion

  • Liu, Tao; Li, AiQun; Ding, YouLiang; Zhao, DaLiang
    • Structural Engineering and Mechanics, v.31 no.3, pp.333-347, 2009
  • Due to structural complexity, structural health monitoring for civil engineering needs more accurate and effective methods of damage identification. This study aims to apply multi-source information fusion (MSIF) to structural damage diagnosis in order to improve the validity of damage detection. First, the essential theory and applied mathematical methods of MSIF are introduced. Then, a structural damage identification method based on multi-mode information fusion is put forward. On the basis of a numerical simulation of a concrete continuous box-beam bridge, it is shown that the improved modal strain energy method based on multi-mode information fusion has better sensitivity to initial structural damage and good robustness to noise. Compared with the classical modal strain energy method, this damage identification method needs much less modal information to detect initial structural damage. When the noise intensity is no greater than 10%, the method identifies initial structural damage reliably. In summary, this structural damage identification method based on multi-mode information fusion yields better damage identification results and is practical for real structures.
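
The following sketch illustrates the general idea of fusing per-mode damage indicators, assuming element-level modal strain energies are already available for the intact and damaged states. The simple weighted average used here is a stand-in for the paper's fusion rule, which is not reproduced here.

```python
# Fusing modal-strain-energy damage indicators across modes (illustrative rule).
import numpy as np


def per_mode_damage_index(mse_intact: np.ndarray, mse_damaged: np.ndarray) -> np.ndarray:
    """Elementwise modal-strain-energy ratio for each mode (>1 suggests damage)."""
    return mse_damaged / mse_intact


def fuse_modes(indices: np.ndarray, mode_weights: np.ndarray) -> np.ndarray:
    """Fuse per-mode damage indices into one indicator per element."""
    weights = mode_weights / mode_weights.sum()
    return weights @ indices  # shape: (n_elements,)


if __name__ == "__main__":
    # rows: modes, columns: structural elements (illustrative numbers)
    mse_intact = np.array([[1.0, 1.0, 1.0, 1.0],
                           [2.0, 2.0, 2.0, 2.0]])
    mse_damaged = np.array([[1.02, 1.35, 1.01, 0.99],
                            [2.05, 2.60, 2.02, 1.98]])
    indices = per_mode_damage_index(mse_intact, mse_damaged)
    fused = fuse_modes(indices, mode_weights=np.array([0.5, 0.5]))
    print(int(np.argmax(fused)))  # element 1 flagged as likely damaged
```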

Emotion-based Real-time Facial Expression Matching Dialogue System for Virtual Human (감정에 기반한 가상인간의 대화 및 표정 실시간 생성 시스템 구현)

  • Kim, Kirak; Yeon, Heeyeon; Eun, Taeyoung; Jung, Moonryul
    • Journal of the Korea Computer Graphics Society, v.28 no.3, pp.23-29, 2022
  • Virtual humans are implemented in virtual spaces (virtual reality, mixed reality, the metaverse, etc.) with dedicated tools such as the Unity 3D engine. Various human modeling tools have been introduced to give virtual humans an appearance, voice, expressions, and behavior similar to real people, and virtual humans implemented with these tools can communicate with users to some extent. However, most virtual humans so far have remained unimodal, using only text or speech. As AI technologies advance, the outdated machine-centered dialogue system is now changing into a human-centered, natural multi-modal system. Using several pre-trained networks, we implemented an emotion-based multi-modal dialogue system that generates human-like utterances and displays appropriate facial expressions in real time.
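
A minimal sketch of the emotion-driven pairing of utterance and facial expression that the abstract describes. The emotion classifier, reply generator, and blendshape presets below are hypothetical placeholders for the pre-trained networks the authors use.

```python
# Pair a generated reply with a matching facial-expression preset (illustrative).
from typing import Dict, Tuple

# Assumed mapping from emotion label to a few facial blendshape weights
EXPRESSION_PRESETS: Dict[str, Dict[str, float]] = {
    "joy": {"mouthSmile": 0.8, "browInnerUp": 0.2},
    "sadness": {"mouthFrown": 0.7, "browInnerUp": 0.6},
    "neutral": {},
}


def classify_emotion(text: str) -> str:
    """Placeholder for a pre-trained emotion classifier."""
    return "joy" if "glad" in text.lower() else "neutral"


def generate_reply(user_utterance: str) -> str:
    """Placeholder for a pre-trained dialogue model."""
    return "I'm glad to hear that!"


def respond(user_utterance: str) -> Tuple[str, Dict[str, float]]:
    """Produce the next utterance and the matching facial-expression weights."""
    reply = generate_reply(user_utterance)
    emotion = classify_emotion(reply)
    return reply, EXPRESSION_PRESETS[emotion]


if __name__ == "__main__":
    print(respond("I passed my exam today."))
```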

UI/UX for Generative AI (생성형 AI 용도의 UI/UX)

  • Tae-Seok Kim; Anh H. Vo; Marvin John Ignacio; Khuong G. T. Diep; Yong-Guk Kim
    • Proceedings of the Korea Information Processing Society Conference, 2023.11a, pp.687-690, 2023
  • This paper surveys various types of UI/UX for generative AI, including text-based, image-based, audio-based, and multi-modal UI/UX, and discusses future prospects that leverage the latest technologies. Generative models are currently used in a broad range of applications across many industries and have recently received considerable attention from researchers and practitioners. UI/UX designed for generative AI makes everyday life more convenient and saves a great deal of time and money. In particular, we examine research directions for generative AI UI/UX that users can interact with comfortably.

Proposal for AI Video Interview Using Image Data Analysis

  • Park, Jong-Youel; Ko, Chang-Bae
    • International Journal of Internet, Broadcasting and Communication, v.14 no.2, pp.212-218, 2022
  • This paper addresses the need for AI video interviews that arises when interviewing to recruit excellent talent in non-face-to-face situations such as the Covid-19 pandemic. A shortcoming of typical AI interviews is that reliability and qualitative factors are difficult to evaluate; in addition, the AI interview proceeds as a one-sided rather than a two-way Q&A. This paper aims to fuse the advantages of existing AI interviews and video interviews. When an interview is conducted using AI image analysis technology, the subjective information used to evaluate the interview is supplemented with quantitative analysis data and HR expert data. The paper applies image-based multi-modal AI image analysis technology, bio-analysis-based HR analysis technology, and WebRTC-based P2P video communication technology. The goal is to propose a method in which biological analysis results (gaze, posture, voice, gesture, landmarks) and HR information (opinions or features based on user propensity) can be presented on a single screen to select the right person to hire.
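
As a rough illustration of presenting the biological-analysis scores and HR information together on a single screen, the sketch below aggregates them into one record. All field names and the scoring scale are assumptions, not the proposal's actual data model.

```python
# Merge multi-modal analysis scores and HR notes into one viewable record.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class InterviewRecord:
    candidate_id: str
    # per-channel scores from the multi-modal video analysis (0.0-1.0 assumed)
    bio_scores: Dict[str, float] = field(default_factory=dict)
    # qualitative notes from HR experts
    hr_notes: List[str] = field(default_factory=list)

    def summary(self) -> Dict[str, object]:
        """Flatten everything the interviewer sees on one screen."""
        avg = sum(self.bio_scores.values()) / max(1, len(self.bio_scores))
        return {
            "candidate": self.candidate_id,
            "bio_average": round(avg, 2),
            **self.bio_scores,
            "hr_notes": self.hr_notes,
        }


if __name__ == "__main__":
    record = InterviewRecord(
        candidate_id="C-001",
        bio_scores={"gaze": 0.82, "posture": 0.74, "voice": 0.66,
                    "gesture": 0.71, "landmark": 0.78},
        hr_notes=["Clear communication", "Relevant project experience"],
    )
    print(record.summary())
```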

Gait Type Classification Using Multi-modal Ensemble Deep Learning Network

  • Park, Hee-Chan; Choi, Young-Chan; Choi, Sang-Il
    • Journal of the Korea Society of Computer and Information, v.27 no.11, pp.29-38, 2022
  • This paper proposes a system for classifying gait types with an ensemble deep learning network, using gait data measured by a smart insole equipped with multiple sensors. The gait type classification system consists of a part that normalizes the data measured by the insole, a part that extracts gait features using a deep learning network, and a part that classifies the gait type from the extracted features. Two kinds of gait feature maps were extracted by independently training networks based on CNNs and LSTMs, which have different characteristics, and the final result was obtained by combining the two networks' classification outputs in an ensemble. Multi-sensor data for seven gait types of adults in their 20s and 30s, namely walking, running, fast walking, going up and down stairs, and going up and down hills, was classified by the proposed ensemble network, and the classification rate was confirmed to be higher than 90%.
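
A minimal PyTorch sketch of the two-branch design summarized above: a CNN branch and an LSTM branch each classify the normalized insole sequence, and their softmax outputs are combined by soft voting. The seven gait classes come from the abstract; the sensor count, sequence length, and layer sizes are illustrative assumptions.

```python
# CNN + LSTM soft-voting ensemble for gait-type classification (illustrative sizes).
import torch
import torch.nn as nn

NUM_CLASSES = 7   # walking, running, fast walking, stairs up/down, hills up/down
NUM_SENSORS = 8   # assumed number of insole pressure sensors
SEQ_LEN = 100     # assumed number of time steps per gait sample


class CNNBranch(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(NUM_SENSORS, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(32, NUM_CLASSES)

    def forward(self, x):                      # x: (batch, sensors, time)
        return self.head(self.features(x).squeeze(-1))


class LSTMBranch(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(NUM_SENSORS, 32, batch_first=True)
        self.head = nn.Linear(32, NUM_CLASSES)

    def forward(self, x):                      # x: (batch, sensors, time)
        out, _ = self.lstm(x.transpose(1, 2))  # -> (batch, time, hidden)
        return self.head(out[:, -1])


def ensemble_predict(cnn, lstm, x):
    """Average the two branches' softmax outputs and pick the gait class."""
    probs = (torch.softmax(cnn(x), dim=-1) + torch.softmax(lstm(x), dim=-1)) / 2
    return probs.argmax(dim=-1)


if __name__ == "__main__":
    x = torch.randn(4, NUM_SENSORS, SEQ_LEN)   # a batch of normalized sequences
    print(ensemble_predict(CNNBranch(), LSTMBranch(), x))
```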

Parameter-Efficient Multi-Modal Highlight Detection via Prompting (Prompting 기반 매개변수 효율적인 멀티 모달 영상 하이라이트 검출 연구)

  • DongHoon Han; Seong-Uk Nam; Eunhwan Park; Nojun Kwak
    • Annual Conference on Human and Language Technology, 2023.10a, pp.372-376, 2023
  • This study proposes the Visual Context Learner (VCL), a lightweight model for video highlight detection and scene extraction. Previous work attaches a trainable transformer such as DETR to frozen feature extractors, including CLIP, and trains it. In contrast, this study shows that highlight detection performance can be improved with a lightweight structure, and that scene extraction is also possible in the same form, suggesting directions for further research on scene extraction. VCL performs highlight detection and scene extraction with a learnable prompt and an MLP added to a frozen CLIP. Using a total of 2,141 trainable parameters, it improves highlight detection HIT@1 (>= Very Good) by 2.71% over the CLIP baseline while also achieving minimal scene extraction performance.
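
The sketch below illustrates the parameter-efficient idea behind VCL: the backbone stays frozen and only a small prompt vector plus an MLP head are trained to score frames. The frozen encoder here is a random stand-in for CLIP, and the dimensions and head layout are guesses rather than the authors' configuration.

```python
# Frozen backbone + learnable prompt + tiny MLP head (prompt-tuning sketch).
import torch
import torch.nn as nn

EMBED_DIM = 512  # CLIP ViT-B/32-style embedding size (assumed)


class PromptedHighlightScorer(nn.Module):
    def __init__(self, embed_dim: int = EMBED_DIM, prompt_len: int = 4):
        super().__init__()
        # Frozen feature extractor standing in for CLIP's visual encoder.
        self.frozen_encoder = nn.Linear(embed_dim, embed_dim)
        for p in self.frozen_encoder.parameters():
            p.requires_grad = False
        # The only trainable pieces: a prompt vector and a tiny MLP head.
        self.prompt = nn.Parameter(torch.zeros(prompt_len, embed_dim))
        self.head = nn.Sequential(nn.Linear(embed_dim, 2), nn.ReLU(), nn.Linear(2, 1))

    def forward(self, frame_features: torch.Tensor) -> torch.Tensor:
        # frame_features: (num_frames, embed_dim) pre-extracted CLIP features
        feats = self.frozen_encoder(frame_features)
        feats = feats + self.prompt.mean(dim=0)   # inject the learned prompt
        return self.head(feats).squeeze(-1)       # one highlight score per frame


if __name__ == "__main__":
    model = PromptedHighlightScorer()
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    print(trainable)  # small trainable budget, in the spirit of VCL's 2,141 params
    print(model(torch.randn(8, EMBED_DIM)).shape)  # torch.Size([8])
```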

A Design of AI Cloud Platform for Safety Management on High-risk Environment (고위험 현장의 안전관리를 위한 AI 클라우드 플랫폼 설계)

  • Ki-Bong, Kim
    • Journal of Advanced Technology Convergence, v.1 no.2, pp.01-09, 2022
  • Recently, safety issues at companies and public institutions can no longer be postponed: when a major safety accident occurs, the direct financial loss is compounded by a large indirect loss of social trust in the organization, and in the case of a fatal accident the damage is even more serious. Accordingly, as companies and public institutions expand their investment in industrial safety education and prevention, systems are being developed for industrial sites with high-risk situations that combine open AI learning-model creation technology enabling safety management services unaffected by user behavior, AI collaboration technology between edge terminals, cloud-edge terminal linkage technology, multi-modal risk situation determination technology, and AI model learning support technology. In particular, with the development and spread of artificial intelligence technology, research on applying it to safety problems is becoming active. Therefore, this paper presents the design of an open cloud platform that can support AI model learning for safety management at high-risk sites.
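
As one possible reading of the multi-modal risk situation determination component mentioned above, the sketch below fuses per-modality risk scores at an edge terminal and raises an alert above a threshold. The modalities, weights, and threshold are illustrative assumptions, not the platform's actual design.

```python
# Edge-side fusion of per-modality risk scores (illustrative weights/threshold).
from dataclasses import dataclass
from typing import Dict, Optional


@dataclass
class RiskDecision:
    risk_score: float
    alert: bool


def assess_risk(
    modality_scores: Dict[str, float],
    weights: Optional[Dict[str, float]] = None,
    threshold: float = 0.7,
) -> RiskDecision:
    """Weighted fusion of per-modality risk scores (each assumed in [0, 1])."""
    weights = weights or {"video": 0.5, "audio": 0.2, "gas_sensor": 0.3}
    score = sum(weights.get(m, 0.0) * s for m, s in modality_scores.items())
    return RiskDecision(risk_score=round(score, 3), alert=score >= threshold)


if __name__ == "__main__":
    decision = assess_risk({"video": 0.9, "audio": 0.4, "gas_sensor": 0.8})
    print(decision)  # RiskDecision(risk_score=0.77, alert=True)
```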

Audio Generative AI Usage Pattern Analysis by the Exploratory Study on the Participatory Assessment Process

  • Hanjin Lee; Yeeun Lee
    • Journal of the Korea Society of Computer and Information, v.29 no.4, pp.47-54, 2024
  • The importance of cultural and arts education utilizing digital tools is increasing in terms of enhancing tech literacy, supporting self-expression, and developing convergent capabilities. The creation process and evaluation of innovative multi-modal AI provide users with expanded creative audio-visual experiences. In particular, creating music with AI offers innovative experiences at every stage, from generating musical ideas to improving lyrics, editing, and producing variations. In this study, we empirically analyzed the process in which learners performed tasks using audio and music generative AI platforms and discussed the results with fellow learners. As a result, 12 services and 10 types of evaluation criteria were collected through voluntary participation and categorized by usage pattern and purpose. Academic, technological, and policy implications for AI-powered liberal arts education are presented from the learners' perspective.