• Title/Summary/Keyword: multimodal artificial intelligence

Real-world multimodal lifelog dataset for human behavior study

  • Chung, Seungeun;Jeong, Chi Yoon;Lim, Jeong Mook;Lim, Jiyoun;Noh, Kyoung Ju;Kim, Gague;Jeong, Hyuntae
    • ETRI Journal
    • /
    • v.44 no.3
    • /
    • pp.426-437
    • /
    • 2022
  • To understand the multilateral characteristics of human behavior and physiological markers related to physical, emotional, and environmental states, extensive lifelog data collection in a real-world environment is essential. Here, we propose a data collection method using multimodal mobile sensing and present a long-term dataset from 22 subjects and 616 days of experimental sessions. The dataset contains over 10,000 hours of data, including physiological data such as photoplethysmography, electrodermal activity, and skin temperature, in addition to multivariate behavioral data. Furthermore, it contains 10,372 user labels with emotional states and 590 days of sleep quality data. To demonstrate feasibility, human activity recognition was applied to the sensor data using a convolutional neural network-based deep learning model, achieving 92.78% recognition accuracy. From the activity recognition results, we extracted daily behavior patterns and discovered five representative models by applying spectral clustering. This demonstrates that the dataset contributes toward understanding human behavior using multimodal data accumulated throughout daily life under natural conditions.
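The abstract above applies spectral clustering to daily behavior patterns extracted from activity recognition. As a rough illustration of that step (not the authors' implementation; the activity categories, similarity kernel, and bandwidth below are assumptions), a minimal two-way spectral clustering of daily activity-proportion vectors can be sketched with NumPy:

```python
import numpy as np

def spectral_cluster_2way(patterns, sigma=0.1):
    """Split daily activity-pattern vectors into two groups using the
    sign of the Fiedler vector of the normalized graph Laplacian."""
    # RBF similarity between daily activity-proportion vectors
    d2 = ((patterns[:, None, :] - patterns[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    D = W.sum(axis=1)
    # Symmetric normalized Laplacian: L = I - D^{-1/2} W D^{-1/2}
    dinv = 1.0 / np.sqrt(D)
    L = np.eye(len(W)) - dinv[:, None] * W * dinv[None, :]
    _, vecs = np.linalg.eigh(L)          # eigenvalues ascending
    fiedler = vecs[:, 1]                 # 2nd-smallest eigenvalue's vector
    return (fiedler > 0).astype(int)

# Toy data: two synthetic "behavior patterns", each row the fraction of a
# day spent in four hypothetical activities (values are illustrative only).
rng = np.random.default_rng(0)
a = rng.normal([0.5, 0.3, 0.1, 0.1], 0.02, size=(5, 4))
b = rng.normal([0.1, 0.1, 0.3, 0.5], 0.02, size=(5, 4))
labels = spectral_cluster_2way(np.vstack([a, b]))
```

In practice a library routine (e.g. scikit-learn's SpectralClustering) with k > 2 clusters would be used; the eigenvector-sign trick shown here only handles the two-cluster case.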

Multimodal Attention-Based Fusion Model for Context-Aware Emotion Recognition

  • Vo, Minh-Cong;Lee, Guee-Sang
    • International Journal of Contents
    • /
    • v.18 no.3
    • /
    • pp.11-20
    • /
    • 2022
  • Human emotion recognition is an exciting topic that has attracted many researchers for a long time. In recent years, there has been increasing interest in exploiting contextual information for emotion recognition. Previous explorations in psychology show that emotional perception is influenced by facial expressions as well as contextual information from the scene, such as human activities, interactions, and body poses. Those explorations initiated a trend in computer vision of exploring the critical role of context, by treating contextual cues as modalities from which to infer the predicted emotion along with facial expressions. However, contextual information has not been fully exploited. The scene emotion created by the surrounding environment can shape how people perceive emotion. Moreover, simple additive fusion in multimodal training is impractical, because the modalities do not contribute equally to the final prediction. The purpose of this paper is to contribute to this growing area of research by exploring the effectiveness of the emotional scene gist in the input image for inferring the emotional state of the primary target. The emotional scene gist includes emotions, emotional feelings, and actions or events that directly trigger emotional reactions in the input image. We also present an attention-based fusion network that combines multimodal features according to their impact on the target emotional state. We demonstrate the effectiveness of the method through a significant improvement on the EMOTIC dataset.
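The abstract contrasts plain additive fusion with attention-based fusion, where each modality is weighted by a learned relevance score. A minimal sketch of that idea (the modality names, dimensions, and scoring function below are hypothetical, not the paper's architecture):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_fusion(features, w_score):
    """Fuse per-modality feature vectors with attention weights so that
    modalities contribute unequally to the final representation,
    in contrast to plain additive (equal-weight) fusion."""
    # One scalar relevance score per modality: s_m = w . f_m
    scores = np.array([w_score @ f for f in features])
    alphas = softmax(scores)                     # weights sum to 1
    fused = sum(a * f for a, f in zip(alphas, features))
    return fused, alphas

# Hypothetical modality features: face, body pose, scene context (8-dim each)
rng = np.random.default_rng(1)
face, body, scene = rng.normal(size=(3, 8))
w = rng.normal(size=8)
fused, alphas = attention_fusion([face, body, scene], w)
```

In a trained network the scoring vector would be learned jointly with the feature extractors, so the attention weights reflect each modality's actual contribution to the target emotional state.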

Primer on Generative Artificial Intelligence and Large Language Models in Medical Imaging (의료영상에서 생성형 인공지능과 대형 언어 모델 입문)

  • Kiduk Kim;Gil-Sun Hong;Namkug Kim
    • Journal of the Korean Society of Radiology
    • /
    • v.85 no.5
    • /
    • pp.848-860
    • /
    • 2024
  • The recent advent of large language models (LLMs), such as ChatGPT, has drawn attention to generative artificial intelligence (AI) in a number of fields. Generative AI can produce different types of data including text, images, and voice, depending on the training methods and datasets used. Additionally, recent advancements in multimodal techniques, which can simultaneously process multiple data types like text and images, have expanded the potential of using multimodal generative AI in the medical environment where various types of clinical and imaging information are used together. This review summarizes the concepts and types of LLMs, image generative AI, and multimodal AI, and it examines the status and future possibilities of generative AI in the field of radiology.

Design of Lightweight Artificial Intelligence System for Multimodal Signal Processing (멀티모달 신호처리를 위한 경량 인공지능 시스템 설계)

  • Kim, Byung-Soo;Lee, Jea-Hack;Hwang, Tae-Ho;Kim, Dong-Sun
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.13 no.5
    • /
    • pp.1037-1042
    • /
    • 2018
  • Neuromorphic technology, which learns and processes information by imitating the human brain, has been researched for decades. Hardware implementations of neuromorphic systems are configured with highly parallel processing structures and a number of simple computational units, achieving high processing speed, low power consumption, and low hardware complexity. Recently, interest in neuromorphic technology for low-power, small embedded systems has been increasing rapidly. To implement low-complexity hardware, it is necessary to reduce the input data dimension without accuracy loss. This paper proposes a low-complexity artificial intelligence engine that consists of parallel neuron engines and a feature extractor. The artificial intelligence engine has a number of neuron engines and a controller to process multimodal sensor data. We verified the performance of the proposed system, including the designed artificial intelligence engine, the feature extractor, and a microcontroller unit (MCU).
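The design above pairs a feature extractor (to cut input dimensionality) with many simple parallel neuron engines. A software sketch of that dataflow, assuming hypothetical window sizes, channel counts, and per-channel statistics rather than the paper's actual hardware design:

```python
import numpy as np

def extract_features(window):
    """Reduce a raw multimodal sensor window (T samples x C channels) to a
    small per-channel statistic vector, lowering the input dimension
    before it reaches the neuron engines."""
    return np.concatenate([
        window.mean(axis=0),                       # mean per channel
        window.std(axis=0),                        # variability per channel
        np.abs(np.diff(window, axis=0)).mean(axis=0),  # mean abs. change
    ])

class NeuronEngine:
    """One simple computational unit: weighted sum plus threshold."""
    def __init__(self, weights, bias=0.0):
        self.w, self.b = weights, bias
    def fire(self, x):
        return float(self.w @ x + self.b > 0)

rng = np.random.default_rng(2)
window = rng.normal(size=(50, 4))     # 50 samples x 4 sensor channels
feat = extract_features(window)       # 12 features instead of 200 raw values
# In hardware the engines run in parallel; here we simply map over them.
engines = [NeuronEngine(rng.normal(size=12)) for _ in range(8)]
outputs = [e.fire(feat) for e in engines]
```

The point of the extractor is visible in the sizes: the 200-value raw window shrinks to a 12-value feature vector, so each neuron engine needs only a 12-weight dot product.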

Dialogue based multimodal dataset including various labels for machine learning research (대화를 중심으로 다양한 멀티모달 융합정보를 포함하는 동영상 기반 인공지능 학습용 데이터셋 구축)

  • Shin, Saim;Jang, Jinyea;Kim, Boen;Park, Hanmu;Jung, Hyedong
    • Annual Conference on Human and Language Technology
    • /
    • 2019.10a
    • /
    • pp.449-453
    • /
    • 2019
  • As media broadcasting diversifies and web content shifts toward multimedia, attempts to actively use multimedia content in artificial intelligence research are beginning. This paper introduces a study that built an integrated fusion-information dataset by linking and analyzing various forms of multimodal information within a single video content item. The constructed AI training dataset attaches the various semantic annotations needed to infer situation, intent, and emotion to multimodal content containing video, audio, and language information, and it has been released as a highly usable AI video dataset. The results of this study help alleviate the shortage of public data for Korean dialogue-processing research, and the dataset, built around Korean with rich situational annotations, is expected to support research on dialogue-service applications based on situation analysis.
Overcoming the Challenges in the Development and Implementation of Artificial Intelligence in Radiology: A Comprehensive Review of Solutions Beyond Supervised Learning

  • Gil-Sun Hong;Miso Jang;Sunggu Kyung;Kyungjin Cho;Jiheon Jeong;Grace Yoojin Lee;Keewon Shin;Ki Duk Kim;Seung Min Ryu;Joon Beom Seo;Sang Min Lee;Namkug Kim
    • Korean Journal of Radiology
    • /
    • v.24 no.11
    • /
    • pp.1061-1080
    • /
    • 2023
  • Artificial intelligence (AI) in radiology is a rapidly developing field with several prospective clinical studies demonstrating its benefits in clinical practice. In 2022, the Korean Society of Radiology held a forum to discuss the challenges and drawbacks in AI development and implementation. Various barriers hinder the successful application and widespread adoption of AI in radiology, such as limited annotated data, data privacy and security, data heterogeneity, imbalanced data, model interpretability, overfitting, and integration with clinical workflows. In this review, some of the various possible solutions to these challenges are presented and discussed; these include training with longitudinal and multimodal datasets, dense training with multitask learning and multimodal learning, self-supervised contrastive learning, various image modifications and syntheses using generative models, explainable AI, causal learning, federated learning with large data models, and digital twins.
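Among the solutions listed above is self-supervised contrastive learning, where two augmented views of the same image are pulled together in embedding space against all other pairs in the batch. A minimal NumPy sketch of the standard InfoNCE objective (a generic formulation, not the review's specific implementation; batch size, embedding dimension, and temperature are illustrative):

```python
import numpy as np

def info_nce(z1, z2, tau=0.1):
    """InfoNCE loss: each embedding in z1 should match its counterpart in
    z2 (the other augmented view of the same image) against all other
    pairs in the batch. Lower loss = better-aligned positive pairs."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau                          # cosine similarities
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))               # positives on diagonal

rng = np.random.default_rng(3)
anchor = rng.normal(size=(4, 16))                     # batch of embeddings
aligned = anchor + 0.01 * rng.normal(size=(4, 16))    # near-identical views
shuffled = rng.normal(size=(4, 16))                   # unrelated views
```

The loss is small when each row of the similarity matrix peaks on its diagonal (matched views) and large otherwise, which is what drives representation learning without labels.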

Trend of Technology for Outdoor Security Robots based on Multimodal Sensors (멀티모달 센서 기반 실외 경비로봇 기술 개발 현황)

  • Chang, J.H.;Na, K.I.;Shin, H.C.
    • Electronics and Telecommunications Trends
    • /
    • v.37 no.1
    • /
    • pp.1-9
    • /
    • 2022
  • With the development of artificial intelligence, many studies have focused on evaluating abnormal situations using various sensors, as industries try to automate some of the surveillance and security tasks traditionally performed by humans. In particular, mobile robots using multimodal sensors are being used in pilot operations aimed at helping security robots cope with various outdoor situations. Multiagent systems, which combine fixed and mobile systems, can provide more efficient coverage than either alone, but they encounter network bottlenecks resulting from increased data processing and communication. In this report, we examine recent trends in object recognition and abnormal-situation determination in changing outdoor security-robot environments, and describe an outdoor security robot platform that operates as a multiagent system equipped with multimodal sensors.

Building a multimodal task-oriented dialogue task for panic disorder counseling (공황장애 상담을 위한 멀티모달 과제 지향 대화 태스크 구축)

  • Subin Kim;Gary Geunbae Lee
    • Annual Conference on Human and Language Technology
    • /
    • 2023.10a
    • /
    • pp.258-262
    • /
    • 2023
  • Task-oriented dialogue systems are useful in that they identify utterance intents and requirements to accomplish the user's goal. Dialogue state tracking is a core module of task-oriented dialogue systems, and research on multimodal dialogue state tracking, which exploits visual information as well as text, has recently been active. This paper proposes a task of tracking the client's state in multimodal panic-disorder counseling dialogues. We present a framework for building a multimodal panic-disorder counseling task-oriented dialogue dataset using ChatGPT, together with analyses demonstrating the quality of the constructed dataset. By measuring the performance of the pretrained language model GPT-2 trained on the benchmark dataset, we provide a baseline that future multimodal dialogue tracking methods should surpass.
Technical Trends in Artificial Intelligence for Robotics Based on Large Language Models (거대언어모델 기반 로봇 인공지능 기술 동향)

  • J. Lee;S. Park;N.W. Kim;E. Kim;S.K. Ko
    • Electronics and Telecommunications Trends
    • /
    • v.39 no.1
    • /
    • pp.95-105
    • /
    • 2024
  • In natural language processing, large language models (LLMs) such as GPT-4 have recently been in the spotlight. The performance of natural language processing has advanced dramatically, driven by increases in the number of model parameters, the number of acceptable input tokens, and model size. Research on multimodal models that can simultaneously process natural language and image data is being actively conducted. Moreover, the natural-language and image-based reasoning capabilities of large language models are being explored in robot artificial intelligence technology. We discuss research and related patent trends in robot task planning and code generation for robot control using large language models.