• Title/Summary/Keyword: multi-modal learning


Digital Mirror System with Machine Learning and Microservices (머신 러닝과 Microservice 기반 디지털 미러 시스템)

  • Song, Myeong Ho; Kim, Soo Dong
    • KIPS Transactions on Software and Data Engineering, v.9 no.9, pp.267-280, 2020
  • A mirror is a physical reflective surface, typically glass coated with a metal amalgam, whose purpose is to reflect an image clearly. Mirrors are available everywhere at any time and are an essential tool for observing our faces and appearance. With the advent of modern software technology, we are motivated to enhance the reflection capability of mirrors with the convenience and intelligence of realtime processing, microservices, and machine learning. In this paper, we present the development of a Digital Mirror System that provides realtime mirror reflection together with additional convenience and intelligence, including personal information retrieval, public information retrieval, appearance-age detection, and emotion detection. Moreover, it provides a multi-modal user interface that is touch-based, voice-based, and gesture-based. We present our design and discuss how it can be implemented with current technology to deliver realtime mirror reflection while providing useful information and machine-learning intelligence.

Monitoring Mood Trends of Twitter Users using Multi-modal Analysis method of Texts and Images (텍스트 및 영상의 멀티모달분석을 이용한 트위터 사용자의 감성 흐름 모니터링 기술)

  • Kim, Eun Yi; Ko, Eunjeong
    • Journal of the Korea Convergence Society, v.9 no.1, pp.419-431, 2018
  • In this paper, we propose a novel method for monitoring the mood trend of Twitter users by analyzing their daily tweets over a long period. To understand their tweets more accurately, we analyze all types of content in tweets, i.e., texts, emoticons, and images, and thus develop a multimodal sentiment analysis method. In the proposed method, two single-modal analyses are first performed to extract the users' moods hidden in texts and images: a lexicon- and learning-based text classifier and a learning-based image classifier. Thereafter, the moods extracted by the respective analyses are combined into a tweet mood and aggregated into a daily mood. As a result, the proposed method generates a user's daily mood-flow graph, which allows us to monitor the mood trend of users more intuitively. For evaluation, we perform two sets of experiments. First, we collect a data set of 40,447 samples and evaluate our method by comparing it with state-of-the-art techniques. The experiments demonstrate that the proposed multimodal analysis method outperforms other baselines as well as our own methods using text-based tweets or images only. Furthermore, to evaluate the potential of the proposed method in monitoring users' mood trends, we tested it with 40 depressive users and 40 normal users; the results prove that the proposed method can be effective in finding depressed users.
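The fusion step described above, where two single-modal mood scores are combined per tweet and then aggregated into a daily mood, can be sketched as follows. The weights and score ranges are illustrative assumptions, not the paper's actual parameters.

```python
# Sketch of the late-fusion idea: per-tweet mood scores from a text
# classifier and an image classifier are combined, then aggregated into a
# daily mood. Weights and score scales here are made up for illustration.

def fuse_tweet_mood(text_score, image_score, w_text=0.6, w_image=0.4):
    """Weighted late fusion of two single-modal mood scores in [-1, 1]."""
    if image_score is None:          # text-only tweet, no image modality
        return text_score
    return w_text * text_score + w_image * image_score

def daily_mood(tweet_scores):
    """Aggregate fused tweet moods into one daily value (mean)."""
    return sum(tweet_scores) / len(tweet_scores)

# Example: three tweets on one day, the second without an image.
fused = [fuse_tweet_mood(0.8, 0.6),
         fuse_tweet_mood(-0.2, None),
         fuse_tweet_mood(0.1, -0.5)]
print(daily_mood(fused))
```

Plotting `daily_mood` over consecutive days yields the kind of mood-flow graph the paper monitors.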

Multi-modal Meteorological Data Fusion based on Self-supervised Learning for Graph (Self-supervised Graph Learning을 통한 멀티모달 기상관측 융합)

  • Hyeon-Ju Jeon; Jeon-Ho Kang; In-Hyuk Kwon
    • Proceedings of the Korea Information Processing Society Conference, 2023.11a, pp.589-591, 2023
  • Current numerical weather prediction systems estimate the atmospheric state by assimilating heterogeneous observations obtained from various sensors such as aircraft and satellites, but processing observations with different observed variables or physical quantities incurs very high computational complexity. To improve the computational efficiency of the existing system and use it efficiently for evaluating or preprocessing observations, this study proposes a methodology that estimates the true atmospheric state from multi-modal meteorological observations through self-supervised learning that accounts for the characteristics of each observation. To fuse multi-modal meteorological observation data collected non-uniformly, we (i) construct a heterogeneous network of meteorological observations to represent the topological information of individual observations, (ii) represent the characteristics of individual observations through pretext-task-based self-supervised learning, and (iii) estimate an atmospheric state close to the truth through a graph-neural-network-based prediction model. By addressing the limitations of existing techniques that rely on large-scale numerical simulation systems, the proposed model can be used for observation-preprocessing tasks such as anomalous-observation detection, observation bias correction, and observation-impact assessment.
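As a rough illustration of the graph-based fusion in step (iii), each observation node can update its state from its neighbours. The one-round mean-aggregation below is a toy stand-in for the paper's graph neural network; the node names, edges, and values are invented.

```python
# Minimal message-passing sketch: each observation node averages its own
# feature with its neighbours' features. A real GNN would use learned
# weights and many feature dimensions; this is a one-scalar toy version.

def aggregate(features, edges):
    """One round of mean-aggregation over a graph.

    features: {node: float}; edges: {node: [neighbour, ...]}.
    """
    updated = {}
    for node, value in features.items():
        msgs = [features[n] for n in edges.get(node, [])]
        updated[node] = (value + sum(msgs)) / (1 + len(msgs))
    return updated

# Toy heterogeneous observation graph: aircraft, satellite, surface station.
feats = {"aircraft": 2.0, "satellite": 4.0, "surface": 0.0}
edges = {"aircraft": ["satellite"],
         "satellite": ["aircraft", "surface"],
         "surface": ["satellite"]}
print(aggregate(feats, edges))
```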

Emotion Recognition and Expression System of User using Multi-Modal Sensor Fusion Algorithm (다중 센서 융합 알고리즘을 이용한 사용자의 감정 인식 및 표현 시스템)

  • Yeom, Hong-Gi; Joo, Jong-Tae; Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems, v.18 no.1, pp.20-26, 2008
  • As intelligent robots and computers become more common, interaction between them and humans is becoming increasingly important, and emotion recognition and expression are indispensable for such interaction. In this paper, we first extract emotional features from speech signals and facial images. Second, we apply Bayesian Learning (BL) and Principal Component Analysis (PCA), and finally we classify five emotion patterns (normal, happy, anger, surprise, and sad). We also experiment with decision fusion and feature fusion to enhance the emotion recognition rate. The decision fusion method applies a fuzzy membership function to the output values of each recognition system, while the feature fusion method selects superior features through Sequential Forward Selection (SFS) and feeds them to a neural network based on a Multi-Layer Perceptron (MLP) to classify the five emotion patterns. The recognized result is then applied to a 2D facial shape to express the emotion.
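The decision-fusion idea, where each single-modal recognizer outputs per-class scores that are then combined into one decision, can be sketched as below. The paper weights the outputs with a fuzzy membership function; the simple product rule here is an illustrative substitute, and all score values are made up.

```python
# Toy decision fusion over the paper's five emotion classes: combine the
# per-class outputs of a speech recognizer and a face recognizer (product
# rule as a stand-in for fuzzy-membership weighting) and pick the argmax.

EMOTIONS = ["normal", "happy", "anger", "surprise", "sad"]

def decision_fusion(speech_probs, face_probs):
    """Combine two per-class score lists and return the winning emotion."""
    combined = [s * f for s, f in zip(speech_probs, face_probs)]
    return EMOTIONS[combined.index(max(combined))]

speech = [0.10, 0.50, 0.10, 0.20, 0.10]   # speech classifier leans 'happy'
face   = [0.05, 0.40, 0.05, 0.45, 0.05]   # face classifier leans 'surprise'
print(decision_fusion(speech, face))
```

Because the product rule rewards agreement, the moderately confident shared class can win over each recognizer's individual favourite.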

Exploration of the Strategy in Constructing Visualization Used by Pre-service Elementary School Teachers in Making Science Video Clip for Flipped Learning - Focusing on Earth Science - (Flipped Learning을 위해 제작한 과학 학습 동영상에서 초등예비교사들이 사용한 시각화 구성 전략 탐색 - 지구 영역을 중심으로 -)

  • Ko, Min Seok
    • Journal of The Korean Association For Science Education, v.35 no.2, pp.231-245, 2015
  • Flipped learning can be used as an innovative teaching method in science education. This study analyzes video clips produced by pre-service elementary school teachers for flipped learning and explores strategies for organizing effective visualization. The pre-service elementary school teachers focused on providing information about macroscopic natural phenomena, using a concrete case-selection strategy for earth science classes. They used markers and spatial-transformation elements effectively, but their efforts to link these elements to the students' experience were insufficient. In addition, it was very rare for them to present the contents as simplified drawings or to provide extreme cases to enhance the students' imagery. To establish effective visual teaching material, it is necessary to provide specific multi-modal cases and to link the material closely to the students' experience through familiar cases or analogical models. It may also be helpful to present simplified drawings to enhance imagery and to provide extreme cases that give students an opportunity to infer new situations.

Multi-modal Representation Learning for Classification of Imported Goods (수입물품의 품목 분류를 위한 멀티모달 표현 학습)

  • Apgil Lee; Keunho Choi; Gunwoo Kim
    • Journal of Intelligence and Information Systems, v.29 no.1, pp.203-214, 2023
  • The Korea Customs Service handles its business efficiently with an electronic customs system that supports one-stop processing, but a more effective method is still needed. All imported and exported goods require an HS Code (Harmonized System Code) for classification and tax-rate application, and item classification, i.e., assigning the HS Code, is a highly difficult task that requires specialized knowledge and experience and is an important part of customs clearance procedures. Therefore, this study develops a deep learning model based on multimodal representation learning that jointly learns from the various types of information in the item-classification request form, such as the product name, product description, and product image. By classifying and recommending HS Codes, the model is expected to reduce the burden of customs work and to help customs procedures by classifying items promptly.
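The core idea, encoding the product name/description and the product image separately and fusing them into one representation for a downstream HS-Code classifier, can be sketched as follows. Both encoders below are toy stand-ins for the learned deep models, and the feature dimensions are arbitrary.

```python
# Toy early-fusion sketch: a text embedding and an image embedding are
# concatenated into one multimodal representation. The encoders here are
# hypothetical placeholders, NOT real learned models.

def embed_text(text, dim=4):
    """Toy text encoder: hashed character sums scaled into [0, 1)."""
    return [sum(ord(c) for c in text[i::dim]) % 10 / 10 for i in range(dim)]

def embed_image(pixels, dim=4):
    """Toy image encoder: mean-pooled chunks of a flat pixel list."""
    chunk = max(1, len(pixels) // dim)
    return [sum(pixels[i * chunk:(i + 1) * chunk]) / chunk
            for i in range(dim)]

def multimodal_representation(text, pixels):
    """Concatenate the two single-modal embeddings (early fusion)."""
    return embed_text(text) + embed_image(pixels)

# A fused 8-dimensional vector an HS-Code classifier would consume.
rep = multimodal_representation("stainless steel bolt", [0.1, 0.2, 0.3, 0.4])
print(len(rep))
```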

A cough detection used multi modal learning (멀티 모달 학습을 이용한 기침 탐지)

  • Choi, Hyung-Tak; Back, Moon-Ki; Kang, Jae-Sik; Lee, Kyu-Chul
    • Proceedings of the Korea Information Processing Society Conference, 2018.05a, pp.439-441, 2018
  • Deep learning is used in many fields because of its high performance, and it has also been applied to cough detection. However, sounds similar to coughs, such as sneezes and loud noises, are hard to distinguish from a single type of data alone. In this paper, we apply multi-modal deep learning that jointly learns from the raw audio data and from spectrogram images encoded from that audio.
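The spectrogram half of that input pair is computed directly from the waveform. The naive DFT below (illustrative frame and hop sizes, no windowing) shows the encoding step in miniature; a real pipeline would use an FFT library.

```python
import math

# Sketch of the audio-to-spectrogram encoding: slice the waveform into
# overlapping frames and take the magnitude of a naive DFT per frame.
# Frame length and hop size are illustrative, not the paper's settings.

def spectrogram(signal, frame_len=8, hop=4):
    """Magnitude spectrogram: one row per frame, one column per frequency."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, hop)]
    spec = []
    for frame in frames:
        row = []
        for k in range(frame_len // 2 + 1):   # non-negative frequencies only
            re = sum(x * math.cos(2 * math.pi * k * n / frame_len)
                     for n, x in enumerate(frame))
            im = -sum(x * math.sin(2 * math.pi * k * n / frame_len)
                      for n, x in enumerate(frame))
            row.append(math.hypot(re, im))
        spec.append(row)
    return spec

# A toy sinusoid at 2 cycles per 8-sample frame peaks in frequency bin 2.
wave = [math.sin(2 * math.pi * 2 * n / 8) for n in range(16)]
spec = spectrogram(wave)
print(len(spec), len(spec[0]))  # frames x frequency bins
```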

A Design of AI Cloud Platform for Safety Management on High-risk Environment (고위험 현장의 안전관리를 위한 AI 클라우드 플랫폼 설계)

  • Ki-Bong, Kim
    • Journal of Advanced Technology Convergence, v.1 no.2, pp.01-09, 2022
  • Recently, safety issues in companies and public institutions can no longer be postponed: when a major safety accident occurs, the organization suffers not only direct financial loss but also a serious indirect loss of social trust, and in the case of a fatal accident the damage is even more severe. Accordingly, as companies and public institutions expand their investment in industrial-safety education and prevention, development is under way on systems that combine open AI learning-model creation technology, which enables safety-management services unaffected by user behavior at industrial sites with high-risk situations, with inter-AI collaboration technology among edge terminals, cloud-edge terminal linkage technology, multi-modal risk-situation determination technology, and AI model-learning support technology. In particular, with the development and spread of artificial-intelligence technology, research applying it to safety issues is becoming active. Therefore, this paper presents a design method for an open cloud platform that can support AI model learning for safety management at high-risk sites.

Fashion attribute-based mixed reality visualization service (패션 속성기반 혼합현실 시각화 서비스)

  • Yoo, Yongmin; Lee, Kyounguk; Kim, Kyungsun
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference, 2022.05a, pp.2-5, 2022
  • With the advent of deep learning and the rapid development of ICT (Information and Communication Technology), research using artificial intelligence is being actively conducted in various fields of society such as politics, economy, and culture. Deep-learning-based artificial intelligence technology is subdivided into domains such as natural language processing, image processing, speech processing, and recommendation systems. In particular, as industry advances, there is a growing need for recommendation systems that analyze market trends and individual characteristics and make recommendations to consumers. In line with these technological developments, this paper extracts and classifies attribute information from structured and unstructured text and image big data through deep-learning-based 'language processing intelligence' and 'image processing intelligence'. We propose an integrated artificial-intelligence-based 'customized fashion advisor' service system that analyzes trends and new materials, discovers 'market-consumer' insights through consumer taste analysis, and can recommend styles, virtual fitting, and design support.


Audio Generative AI Usage Pattern Analysis by the Exploratory Study on the Participatory Assessment Process

  • Hanjin Lee; Yeeun Lee
    • Journal of the Korea Society of Computer and Information, v.29 no.4, pp.47-54, 2024
  • The importance of cultural-arts education utilizing digital tools is increasing for enhancing tech literacy and self-expression and for developing convergent capabilities. The creation process and evaluation of innovative multi-modal AI provide users with expanded creative audio-visual experiences. In particular, the process of creating music with AI provides innovative experiences in all areas, from musical ideas to improving lyrics, editing, and variation. In this study, we empirically analyzed the process of performing tasks using an audio and music generative AI platform and discussing them with fellow learners. As a result, 12 services and 10 types of evaluation criteria were collected through voluntary participation and divided by usage pattern and purpose. Academic, technological, and policy implications are presented for AI-powered liberal-arts education from the learners' perspective.