• Title/Summary/Keyword: multi-modal data (멀티 모달 데이터)

A News Video Mining based on Multi-modal Approach and Text Mining (멀티모달 방법론과 텍스트 마이닝 기반의 뉴스 비디오 마이닝)

  • Lee, Han-Sung;Im, Young-Hee;Yu, Jae-Hak;Oh, Seung-Geun;Park, Dai-Hee
    • Journal of KIISE:Databases / v.37 no.3 / pp.127-136 / 2010
  • With the rapid growth of information and computer communication technologies, the number of digital documents, including multimedia data, has recently exploded. In particular, news video databases and news video mining have become the subject of extensive research because of their information richness, with the aim of developing effective and efficient tools for manipulating and analyzing news videos. However, most research focuses on the browsing, retrieval, and summarization of news videos; discovering and analyzing the plentiful latent semantic knowledge in news videos is still at a relatively early stage. In this paper, we propose a news video mining system based on a multi-modal approach and text mining, which uses the visual-textual information of news video clips and their scripts. The proposed system automatically and systematically constructs a taxonomy of news video stories with a hierarchical clustering algorithm, one of the standard text mining methods (see the sketch below). It then analyzes the topics of news video stories from multiple angles by means of a time-cluster trend graph, a weighted cluster growth index, and network analysis. To validate our approach, we analyzed the news videos on "The Second Summit of South and North Korea in 2007".
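
A minimal sketch of the taxonomy-building step, not the authors' implementation: news-story scripts are vectorized with TF-IDF and grouped by agglomerative (hierarchical) clustering, the text mining method named in the abstract. The example scripts and the cluster count are hypothetical.

```python
# Hedged sketch: hierarchical clustering of news-story scripts into a taxonomy.
# The scripts below are invented stand-ins for real news transcripts.
from sklearn.feature_extraction.text import TfidfVectorizer
from scipy.cluster.hierarchy import linkage, fcluster

scripts = [
    "summit talks between south and north korea begin",
    "leaders discuss economic cooperation at the summit",
    "heavy rain floods southern provinces",
    "storm damage reported across the south coast",
]

X = TfidfVectorizer().fit_transform(scripts).toarray()
Z = linkage(X, method="ward")                    # build the cluster tree (taxonomy)
labels = fcluster(Z, t=2, criterion="maxclust")  # cut the tree into 2 story clusters
print(labels)  # e.g. [1 1 2 2]: summit stories vs. weather stories
```

Cutting the tree at different depths yields coarser or finer story categories, which is what makes the result a taxonomy rather than a flat partition.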

Korean Emotional Speech and Facial Expression Database for Emotional Audio-Visual Speech Generation (대화 영상 생성을 위한 한국어 감정음성 및 얼굴 표정 데이터베이스)

  • Baek, Ji-Young;Kim, Sera;Lee, Seok-Pil
    • Journal of Internet Computing and Services / v.23 no.2 / pp.71-77 / 2022
  • In this paper, a database is collected for extending a speech synthesis model to one that synthesizes speech according to emotion and generates facial expressions. The database is divided into male and female data and consists of emotional speech and facial expressions. Two professional actors of different genders speak sentences in Korean. The sentences are divided into four emotions: happiness, sadness, anger, and neutrality, and each actor performs about 3,300 sentences per emotion. The 26,468 sentences collected by filming do not overlap, and each recording carries an expression matching the corresponding emotion. Since building a high-quality database is important for the performance of future research, the database is assessed on emotional category, intensity, and genuineness. To measure recognition accuracy by data modality, the database is divided into audio-video, audio-only, and video-only subsets (see the sketch below).
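
A minimal sketch, with hypothetical record fields and file names rather than the published schema, of how such a corpus can be indexed and split into the three modality subsets the abstract describes.

```python
# Hedged sketch: one metadata record per filmed sentence, plus modality splits.
from dataclasses import dataclass

@dataclass
class Clip:
    actor: str       # "male" or "female"
    emotion: str     # "happiness" | "sadness" | "anger" | "neutrality"
    audio_path: str  # emotional speech recording
    video_path: str  # facial expression recording

clips = [
    Clip("female", "happiness", "f_hap_0001.wav", "f_hap_0001.mp4"),
    Clip("male", "anger", "m_ang_0001.wav", "m_ang_0001.mp4"),
]

# The three evaluation subsets used to compare accuracy per modality.
audio_video = [(c.audio_path, c.video_path, c.emotion) for c in clips]
audio_only  = [(c.audio_path, c.emotion) for c in clips]
video_only  = [(c.video_path, c.emotion) for c in clips]
```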

Design of Big Semantic System for Factory Energy Management in IoE environments (IoE 환경에서 공장에너지 관리를 위한 빅시맨틱 시스템 설계)

  • Kwon, Soon-Hyun;Lee, Joa-Hyoung;Kim, Seon-Hyeog;Lee, Sang-Keum;Shin, Young-Mee;Doh, Yoon-Mee;Heo, Tae-Wook
    • Proceedings of the Korea Information Processing Society Conference / 2022.05a / pp.37-39 / 2022
  • In conventional IoE environments, collected data is linked with the domain knowledge of a specific service in order to provide that service. However, in IoE environments where the types of collected data are diverse and the nominally static knowledge base changes dynamically with the situation, existing knowledge base systems could not provide seamless service. This paper therefore presents a big semantic system that semantically processes the large-volume, real-time data generated in IoE environments, links it to a common domain knowledge base, and organically augments the knowledge through both conventional knowledge base reasoning and machine learning based knowledge embedding. The proposed system collects multimodal (structured and unstructured) data from the IoE environment, performs semi-automatic semantic conversion, and stores the result in the domain knowledge base; it augments the knowledge base through semantic reasoning and provides knowledge from the entire knowledge base, including the augmented portion, to users through structured and semi-structured queries (see the sketch below). In addition, it augments the existing knowledge base by performing training and prediction with machine learning based knowledge embedding. The system presented in this paper is to be implemented as the base technology of a factory energy management system, an energy-saving system that collects in-factory energy information and performs real-time control based on process and facility status and operational information.
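
A minimal sketch of the two steps the abstract names, assuming the rdflib library; the `http://example.org/factory#` namespace and the sensor fields are invented for illustration, not taken from the paper: semantic conversion of one collected reading into triples, followed by a structured (SPARQL) user query over the knowledge base.

```python
# Hedged sketch: semantic conversion of a factory sensor reading + SPARQL query.
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/factory#")  # invented namespace
g = Graph()

# Semi-automatic semantic conversion of one structured reading into triples.
reading = {"machine": "press01", "power_kw": 42.5}
machine = EX[reading["machine"]]
g.add((machine, EX.powerConsumption, Literal(reading["power_kw"])))

# Structured user query: which machines draw more than 40 kW right now?
query = """
    PREFIX ex: <http://example.org/factory#>
    SELECT ?m WHERE { ?m ex:powerConsumption ?p . FILTER(?p > 40) }
"""
for row in g.query(query):
    print(row.m)  # -> http://example.org/factory#press01
```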

Human body learning system using multimodal and user-centric interfaces (멀티모달 사용자 중심 인터페이스를 적용한 인체 학습 시스템)

  • Kim, Ki-Min;Kim, Jae-Il;Park, Jin-Ah
    • Proceedings of the HCI Society of Korea Conference / 2008.02a / pp.85-90 / 2008
  • This paper describes a human body learning system that uses a multi-modal user interface. Through our learning system, students can study human anatomy interactively. Existing learning methods rely on one-way materials such as images, text, and movies. Instead, we propose a new learning system that combines 3D organ surface models, a haptic interface, and a hierarchical data structure of human organs (sketched below) to provide enhanced learning that engages sensorimotor skills.
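
A minimal sketch, with hypothetical organ names and nesting rather than the paper's actual hierarchy, of a hierarchical organ data structure a learner could navigate level by level.

```python
# Hedged sketch: a tree of organs, from whole body down to sub-structures.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Organ:
    name: str
    children: List["Organ"] = field(default_factory=list)

    def walk(self, depth: int = 0) -> None:
        print("  " * depth + self.name)   # indent by depth in the hierarchy
        for child in self.children:
            child.walk(depth + 1)

heart = Organ("heart", [
    Organ("left atrium"), Organ("right atrium"),
    Organ("left ventricle"), Organ("right ventricle"),
])
body = Organ("human body", [Organ("circulatory system", [heart])])
body.walk()  # prints the anatomy hierarchy a student can drill into
```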

Audio and Image based Emotion Recognition Framework on Real-time Video Streaming (실시간 동영상 스트리밍 환경에서 오디오 및 영상기반 감정인식 프레임워크)

  • Bang, Jaehun;Lim, Ho Jun;Lee, Sungyoung
    • Proceedings of the Korea Information Processing Society Conference / 2017.04a / pp.1108-1111 / 2017
  • With the emergence of diverse IoT sensor devices, emotion recognition research has recently been shifting from single-source techniques to multimodal sensor-based approaches, and research on emotion recognition using audio and video in particular is being actively pursued. Existing audio- and video-based emotion recognition studies use open databases in which the two sensor streams were recorded and stored simultaneously, extract features from each stream without any separate event handling, and recognize emotion with a single classifier. Such approaches cope poorly with intervals in which no one is speaking or no face is visible, and there is little decision-level fusion research that combines the two sources into a single emotion. To solve these problems, this paper proposes a real-time audio- and video-based emotion recognition framework that extracts the event information embedded in the audio and video, recognizes emotions through separate audio and video recognition modules, and performs decision-level fusion by integrating the derived emotions over time units (see the sketch below).
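
A minimal sketch of decision-level fusion per time window, with hypothetical emotion labels and equal modality weights; it is not the proposed framework itself. A modality whose event detector found no speech or no face is simply skipped.

```python
# Hedged sketch: decision-level fusion of per-window emotion predictions.
def fuse_window(audio_probs, video_probs, w_audio=0.5, w_video=0.5):
    """Each input is {emotion: probability}, or None if no event was detected."""
    if audio_probs is None and video_probs is None:
        return None                                   # nothing to recognize
    if audio_probs is None:
        return max(video_probs, key=video_probs.get)  # video only
    if video_probs is None:
        return max(audio_probs, key=audio_probs.get)  # audio only
    emotions = audio_probs.keys() | video_probs.keys()
    fused = {e: w_audio * audio_probs.get(e, 0.0)
              + w_video * video_probs.get(e, 0.0) for e in emotions}
    return max(fused, key=fused.get)

# A window where speech was detected but the face was occluded:
print(fuse_window({"happy": 0.7, "neutral": 0.3}, None))  # -> happy
```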

Ontology-Based Dynamic Context Management and Spatio-Temporal Reasoning for Intelligent Service Robots (지능형 서비스 로봇을 위한 온톨로지 기반의 동적 상황 관리 및 시-공간 추론)

  • Kim, Jonghoon;Lee, Seokjun;Kim, Dongha;Kim, Incheol
    • Journal of KIISE / v.43 no.12 / pp.1365-1375 / 2016
  • One of the most important capabilities for autonomous service robots working in living environments is to recognize and understand the correct context in a dynamically changing environment. To generate high-level context knowledge for decision-making from multiple sensory data streams, many technical problems must be solved, including multi-modal sensory data fusion, uncertainty handling, symbolic knowledge grounding, time dependency, dynamics, and time-constrained spatio-temporal reasoning. Considering these problems, this paper proposes an effective dynamic context management and spatio-temporal reasoning method for intelligent service robots. To guarantee efficient context management and reasoning, our algorithm generates low-level context knowledge reactively for every incoming item of sensory or perception data, while postponing high-level context knowledge generation until it is demanded by the decision-making module; when high-level context knowledge is demanded, it is derived through backward spatio-temporal reasoning (see the sketch below). In experiments with a Turtlebot using a Kinect visual sensor, the dynamic context management and spatio-temporal reasoning system based on the proposed method showed high performance.
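
A minimal sketch of the eager/lazy split the abstract describes, using invented predicates and a toy rule rather than the authors' ontology or reasoner: low-level facts are stored reactively as sensor data arrives, while a high-level context query is answered backward, on demand, from the stored facts.

```python
# Hedged sketch: reactive low-level facts, on-demand (backward) high-level query.
low_level = []  # time-stamped low-level context facts

def on_sensor(t: int, fact: str) -> None:
    low_level.append((t, fact))        # reactive: record every incoming datum

def meeting_ongoing(t_query: int, window: int = 5) -> bool:
    # Backward reasoning on demand: the high-level fact "a meeting is ongoing"
    # is derived only when asked for, from recent low-level facts.
    recent = [f for (t, f) in low_level if t_query - window <= t <= t_query]
    return recent.count("person_detected") >= 2 and "speech_detected" in recent

on_sensor(1, "person_detected")
on_sensor(2, "person_detected")
on_sensor(3, "speech_detected")
print(meeting_ongoing(4))  # -> True, derived only at query time
```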

NUI/NUX framework based on intuitive hand motion (직관적인 핸드 모션에 기반한 NUI/NUX 프레임워크)

  • Lee, Gwanghyung;Shin, Dongkyoo;Shin, Dongil
    • Journal of Internet Computing and Services / v.15 no.3 / pp.11-19 / 2014
  • The natural user interface/experience (NUI/NUX) provides a natural motion interface without devices or tools such as mice, keyboards, pens, and markers. Until now, typical motion recognition methods have used markers, receiving the coordinates of each marker as relative data and storing them in a database. However, recognizing motion accurately requires more markers, and attaching them and processing the data takes considerable time. Moreover, because NUI/NUX frameworks have been developed without the most important quality, intuitiveness, usability problems arise and users are forced to learn the conventions of many different frameworks. To compensate for these problems, we avoid markers altogether and implement the system so that anyone can operate it. We also design a multi-modal NUI/NUX framework that controls voice, body motion, and facial expression simultaneously, and propose a new mouse-operation algorithm that recognizes intuitive hand gestures and maps them onto the monitor (see the sketch below), so that users can perform the "hand mouse" operation easily and intuitively.
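
A minimal sketch of the mapping idea, with hypothetical tracker output and thresholds rather than the authors' algorithm: a tracked hand position, normalized to [0, 1], is mapped onto monitor pixels, and a simple pinch gesture is interpreted as a mouse click.

```python
# Hedged sketch: normalized hand position -> cursor pixels, pinch -> click.
SCREEN_W, SCREEN_H = 1920, 1080

def hand_to_cursor(hand_x: float, hand_y: float):
    """hand_x, hand_y in [0, 1] from the hand tracker -> pixel coordinates."""
    x = int(min(max(hand_x, 0.0), 1.0) * (SCREEN_W - 1))
    y = int(min(max(hand_y, 0.0), 1.0) * (SCREEN_H - 1))
    return x, y

def is_click(thumb_tip, index_tip, threshold=0.05) -> bool:
    """Pinch: thumb and index fingertips (normalized coords) nearly touch."""
    return ((thumb_tip[0] - index_tip[0]) ** 2 +
            (thumb_tip[1] - index_tip[1]) ** 2) ** 0.5 < threshold

print(hand_to_cursor(0.5, 0.25))              # -> (959, 269)
print(is_click((0.40, 0.50), (0.42, 0.51)))   # -> True (pinch detected)
```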

Real-time Background Music System for Immersive Dialogue in Metaverse based on Dialogue Emotion (메타버스 대화의 몰입감 증진을 위한 대화 감정 기반 실시간 배경음악 시스템 구현)

  • Kirak Kim;Sangah Lee;Nahyeon Kim;Moonryul Jung
    • Journal of the Korea Computer Graphics Society / v.29 no.4 / pp.1-6 / 2023
  • Background music is often used to enhance immersion in metaverse environments. However, it is mostly pre-matched and repeated, which can be distracting because it does not align well with rapidly changing, user-interactive content. We therefore implemented a system that provides a more immersive metaverse conversation experience by 1) training a regression neural network that extracts emotions from an utterance using KEMDy20, the Korean multimodal emotion dataset; 2) selecting music corresponding to the extracted emotions from the DEAM dataset, in which music is tagged with arousal-valence levels (see the sketch below); and 3) combining these with a virtual space where users can hold real-time conversations with avatars.
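
A minimal sketch of the selection step, with invented track names and arousal-valence values rather than the DEAM annotations: the system picks the track whose tag is closest to the emotion the regression model predicted for the current utterance.

```python
# Hedged sketch: nearest-neighbor music selection in arousal-valence space.
import math

tracks = {                     # track_id -> (arousal, valence), both in [-1, 1]
    "calm_piano": (-0.6, 0.4),
    "upbeat_pop": (0.7, 0.8),
    "tense_strings": (0.5, -0.6),
}

def pick_track(arousal: float, valence: float) -> str:
    return min(tracks, key=lambda t: math.dist(tracks[t], (arousal, valence)))

# The regression model predicted a happy, excited utterance:
print(pick_track(0.6, 0.7))   # -> "upbeat_pop"
```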

Multi-classification of Osteoporosis Grading Stages Using Abdominal Computed Tomography with Clinical Variables : Application of Deep Learning with a Convolutional Neural Network (멀티 모달리티 데이터 활용을 통한 골다공증 단계 다중 분류 시스템 개발: 합성곱 신경망 기반의 딥러닝 적용)

  • Tae Jun Ha;Hee Sang Kim;Seong Uk Kang;DooHee Lee;Woo Jin Kim;Ki Won Moon;Hyun-Soo Choi;Jeong Hyun Kim;Yoon Kim;So Hyeon Bak;Sang Won Park
    • Journal of the Korean Society of Radiology / v.18 no.3 / pp.187-201 / 2024
  • Osteoporosis is a major global health issue that often remains undetected until a fracture occurs. To facilitate early detection, deep learning (DL) models were developed to classify osteoporosis using abdominal computed tomography (CT) scans. This study used retrospectively collected data from 3,012 contrast-enhanced abdominal CT scans. The DL models were built on image data, demographic/clinical information, and multi-modality data, respectively (a fusion sketch follows below). Patients were categorized into normal, osteopenia, and osteoporosis groups based on T-scores obtained from dual-energy X-ray absorptiometry. The models showed high accuracy and effectiveness, with the combined-data model performing best, achieving an area under the receiver operating characteristic curve of 0.94 and an accuracy of 0.80. The image-based model also performed well, while the demographic-data model had lower accuracy and effectiveness. In addition, the DL models were interpreted with gradient-weighted class activation mapping (Grad-CAM) to highlight clinically relevant features in the images, revealing the femoral neck as a common site for fractures. The study shows that DL can accurately identify osteoporosis stages from abdominal CT and clinical data, indicating the potential of abdominal CT scans for early osteoporosis detection and for reducing fracture risk through prompt treatment.
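
A minimal sketch of the multi-modality idea, with toy layer sizes and an invented clinical feature count rather than the published architecture: CNN features from a CT slice are concatenated with demographic/clinical variables before a 3-class grading head.

```python
# Hedged sketch: late fusion of image and clinical features for 3-class grading.
import torch
import torch.nn as nn

class FusionNet(nn.Module):
    def __init__(self, n_clinical: int = 8, n_classes: int = 3):
        super().__init__()
        self.cnn = nn.Sequential(          # toy image branch for one CT slice
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.clin = nn.Sequential(nn.Linear(n_clinical, 16), nn.ReLU())
        self.head = nn.Linear(16 + 16, n_classes)  # normal/osteopenia/osteoporosis

    def forward(self, image, clinical):
        feats = torch.cat([self.cnn(image), self.clin(clinical)], dim=1)
        return self.head(feats)

model = FusionNet()
logits = model(torch.randn(2, 1, 128, 128), torch.randn(2, 8))
print(logits.shape)  # torch.Size([2, 3]): one grade score vector per scan
```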

Multi-modal Image Processing for Improving Recognition Accuracy of Text Data in Images (이미지 내의 텍스트 데이터 인식 정확도 향상을 위한 멀티 모달 이미지 처리 프로세스)

  • Park, Jungeun;Joo, Gyeongdon;Kim, Chulyun
    • Database Research / v.34 no.3 / pp.148-158 / 2018
  • Optical character recognition (OCR) is a technique for extracting and recognizing text from images. It is an important preprocessing step in data analysis, since much real-world text information is embedded in images. Many OCR engines achieve high recognition accuracy on images whose text is clearly separable from the background, such as black lettering on a white background, but low accuracy on images with complex backgrounds. To improve accuracy on such complex images, the input image must be transformed to make the text more noticeable. In this paper, we propose a method that segments the input image into text lines, enabling OCR engines to recognize each line more efficiently, and that determines the final output by comparing the recognition rates of a CLAHE module and a Two-step module, each of which separates text from background regions using image processing techniques (a sketch of the selection step follows below). Through thorough experiments against the well-known OCR engines Tesseract and ABBYY, we show that our proposed method achieves the best recognition accuracy on complex background images.
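
A minimal sketch of the confidence-based selection idea, assuming OpenCV and pytesseract; the input file name is hypothetical and the paper's Two-step module is not reproduced. Both the original and a CLAHE-enhanced image are OCRed, and whichever variant Tesseract reads with higher mean word confidence is kept.

```python
# Hedged sketch: choose between original and CLAHE-enhanced input by OCR confidence.
import cv2
import pytesseract
from pytesseract import Output

def mean_conf(img) -> float:
    data = pytesseract.image_to_data(img, output_type=Output.DICT)
    confs = [float(c) for c in data["conf"] if float(c) >= 0]  # -1 means no text
    return sum(confs) / len(confs) if confs else 0.0

gray = cv2.imread("sign.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input image
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(gray)                         # contrast-limited equalization

best = max([gray, enhanced], key=mean_conf)          # higher-confidence variant wins
print(pytesseract.image_to_string(best))
```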