• Title/Summary/Keyword: multi-modal service


Ubiquitous Context-aware Modeling and Multi-Modal Interaction Design Framework (유비쿼터스 환경의 상황인지 모델과 이를 활용한 멀티모달 인터랙션 디자인 프레임웍 개발에 관한 연구)

  • Kim, Hyun-Jeong;Lee, Hyun-Jin
    • Archives of design research
    • /
    • v.18 no.2 s.60
    • /
    • pp.273-282
    • /
    • 2005
  • In this study, we propose the Context Cube, a conceptual model of user context, and a multi-modal interaction design framework for developing ubiquitous services. By modeling user context and analyzing the correlation between context awareness and multi-modality, they help infer the meaning of a context and offer services that meet user needs. We developed a case study to verify the validity of the Context Cube, and applied the proposed interaction design framework to derive a personalized ubiquitous service. Context awareness can be understood in terms of information properties: a user's basic activity, the locations of the user and devices (environment), time, and the user's daily schedule. These properties allow us to construct the Context Cube, a three-dimensional conceptual model. We also developed a ubiquitous interaction design process that encompasses multi-modal interaction design, based on studying the features of user interaction represented on the Context Cube.
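The Context Cube described above lends itself to a simple sketch: a three-dimensional lookup over activity, location/environment, and time, from which a likely service is inferred. The axes, cell contents, and majority-vote rule below are illustrative assumptions, not the paper's actual model.

```python
from collections import defaultdict

class ContextCube:
    """Minimal sketch of a 3-D context model: activity x location x time."""
    def __init__(self):
        # cube[(activity, location, time_slot)] -> list of observed services
        self.cube = defaultdict(list)

    def record(self, activity, location, time_slot, service):
        self.cube[(activity, location, time_slot)].append(service)

    def infer_service(self, activity, location, time_slot):
        """Return the most frequently observed service for this context cell."""
        observed = self.cube.get((activity, location, time_slot), [])
        if not observed:
            return None
        return max(set(observed), key=observed.count)

cube = ContextCube()
cube.record("cooking", "kitchen", "evening", "show recipe")
cube.record("cooking", "kitchen", "evening", "show recipe")
cube.record("cooking", "kitchen", "evening", "play music")
print(cube.infer_service("cooking", "kitchen", "evening"))  # -> show recipe
```

A real implementation would also attach the user's daily schedule to each cell and fall back to neighboring cells (e.g. adjacent time slots) when a cell is empty.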


Structural damage alarming and localization of cable-supported bridges using multi-novelty indices: a feasibility study

  • Ni, Yi-Qing;Wang, Junfang;Chan, Tommy H.T.
    • Structural Engineering and Mechanics
    • /
    • v.54 no.2
    • /
    • pp.337-362
    • /
    • 2015
  • This paper presents a feasibility study on structural damage alarming and localization of long-span cable-supported bridges using multi-novelty indices formulated from monitoring-derived modal parameters. The proposed method, which requires neither a structural model nor a damage model, is applicable to structures of arbitrary complexity. To enhance the tolerance to measurement noise/uncertainty and the sensitivity to structural damage, an improved novelty index is formulated in terms of auto-associative neural networks (ANNs), where the output vector is designated to differ from the input vector, and training the ANNs requires only the measured modal properties of the intact structure under in-service conditions. After validating the improved novelty index's enhanced capability for structural damage alarming over the commonly configured novelty index, its performance in detecting damage occurrence in large-scale bridges is examined through numerical simulation studies of the suspension Tsing Ma Bridge (TMB) and the cable-stayed Ting Kau Bridge (TKB) subjected to different types of structural damage. The improved novelty index is then extended to formulate multi-novelty indices in terms of the measured modal frequencies and incomplete mode-shape components for damage region identification. The capability of the formulated multi-novelty indices for damage region identification is likewise examined through numerical simulations of the TMB and TKB.
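The novelty-index idea can be outlined in a few lines: train a reconstruction model on modal parameters of the intact structure only, then raise an alarm when the reconstruction error of new measurements exceeds a baseline threshold. The sketch below substitutes a trivial mean-reconstruction for the paper's auto-associative neural network; all numbers and the 3-sigma threshold rule are illustrative.

```python
import numpy as np

def novelty_index(x, reconstruct):
    """Euclidean distance between a feature vector and its reconstruction."""
    return float(np.linalg.norm(x - reconstruct(x)))

# Baseline: modal frequencies of the intact structure (illustrative numbers).
rng = np.random.default_rng(0)
baseline = 1.0 + 0.01 * rng.standard_normal((200, 4))
mean = baseline.mean(axis=0)

# Stand-in for the trained auto-associative network: map any input to the
# baseline mean (a real ANN would learn a much richer mapping).
reconstruct = lambda x: mean

# Alarm threshold from baseline statistics (mean + 3 sigma of the index).
indices = np.array([novelty_index(x, reconstruct) for x in baseline])
threshold = indices.mean() + 3 * indices.std()

damaged = mean * np.array([0.9, 1.0, 1.0, 1.0])  # 10% drop in one frequency
print(novelty_index(damaged, reconstruct) > threshold)  # -> True: alarmed
```

Extending this single index to per-region indices over frequency and mode-shape subsets is the multi-novelty generalization the abstract describes.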

On Addressing Network Synchronization in Object Tracking with Multi-modal Sensors

  • Jung, Sang-Kil;Lee, Jin-Seok;Hong, Sang-Jin
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.3 no.4
    • /
    • pp.344-365
    • /
    • 2009
  • The performance of a tracking system is greatly improved if multiple types of sensors are combined to achieve the tracking objective instead of relying on a single type of sensor. To conduct such multi-modal tracking, we previously developed a multi-modal sensor-based tracking model in which acoustic sensors mainly track the objects and visual sensors compensate for the tracking errors [1]. In this paper, we identify a network synchronization problem in the developed tracking system. The problem is caused by the different locations and traffic characteristics of the multi-modal sensors and by the non-synchronized arrival of the captured sensor data at a processing server. To deliver the sensor data effectively, we propose a time-based packet aggregation algorithm in which the acoustic sensor data are aggregated based on their sampling time and sent to the server. The delivered acoustic sensor data are then compensated by visual images to correct the tracking errors, and this compensation improves the tracking accuracy in the ideal case. In real situations, however, the improvement from visual compensation can be severely degraded by the aforementioned network synchronization problem, whose impact is analyzed by simulations in this paper. To resolve the problem, we differentiate the service level of sensor traffic using Weighted Round Robin (WRR) scheduling at the routers. The weighting factor allocated to each queue is calculated by a proposed Delay-based Weight Allocation (DWA) algorithm. The simulations show that the traffic differentiation model can mitigate the desynchronization of the sensor data. Finally, we analyze the expected traffic behavior of the tracking system in terms of the acoustic sampling interval and the visual image size.
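The scheduling step can be sketched as follows; the proportional delay-to-weight rule stands in for the paper's DWA algorithm, whose exact formulation is not reproduced here, and the queue names and delay figures are made up.

```python
from collections import deque

def delay_based_weights(avg_delays, total_slots=10):
    """Allocate service slots proportionally to each class's average delay,
    so delay-prone traffic (e.g. acoustic) is served more often per cycle.
    Illustrative stand-in for the paper's DWA algorithm."""
    total = sum(avg_delays.values())
    return {k: max(1, round(total_slots * d / total))
            for k, d in avg_delays.items()}

def weighted_round_robin(queues, weights):
    """Serve each queue up to `weights[name]` packets per cycle."""
    served = []
    while any(queues.values()):
        for name, q in queues.items():
            for _ in range(weights[name]):
                if q:
                    served.append(q.popleft())
    return served

queues = {"acoustic": deque(f"a{i}" for i in range(6)),
          "visual":   deque(f"v{i}" for i in range(3))}
weights = delay_based_weights({"acoustic": 40e-3, "visual": 10e-3})  # seconds
served = weighted_round_robin(queues, weights)
print(weights)  # acoustic gets more service slots than visual
print(served)   # all acoustic packets drain before the last visual one
```

The point of the weighting is that the queue whose packets historically wait longest gets more slots per WRR cycle, evening out arrival times at the server.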

Development of Context Awareness and Service Reasoning Technique for Handicapped People (멀티 모달 감정인식 시스템 기반 상황인식 서비스 추론 기술 개발)

  • Ko, Kwang-Eun;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.19 no.1
    • /
    • pp.34-39
    • /
    • 2009
  • Human emotion is subjective and impulsive, and it unconsciously expresses intentions and needs. It therefore carries rich contextual information about the users of ubiquitous computing environments or intelligent robot systems. Indicators from which a user's emotion can be recognized include facial images, voice signals, biological signal spectra, and so on. In this paper, we generate separate facial and voice emotion recognition results from facial images and voice, to increase the convenience and efficiency of emotion recognition. We also extract the features that best fit the image and sound information to improve the emotion recognition rate, and implement a multi-modal emotion recognition system based on feature fusion. Finally, using the emotion recognition results, we demonstrate the feasibility of a ubiquitous computing service reasoning method based on a Bayesian network and a ubiquitous context scenario in the ubiquitous computing environment.
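The Bayesian-network reasoning step can be illustrated with a toy posterior computation: given a recognized emotion, score each candidate service by prior times likelihood. The services, priors, and likelihood values below are invented for illustration and are not taken from the paper.

```python
# Toy Bayesian service reasoning:
#   P(service | emotion) ∝ P(emotion | service) * P(service)
priors = {"play_calm_music": 0.3, "call_caregiver": 0.3, "no_action": 0.4}
likelihood = {  # P(recognized emotion | service is appropriate) — illustrative
    "play_calm_music": {"angry": 0.6, "sad": 0.3, "neutral": 0.1},
    "call_caregiver":  {"angry": 0.3, "sad": 0.6, "neutral": 0.1},
    "no_action":       {"angry": 0.1, "sad": 0.1, "neutral": 0.8},
}

def reason_service(emotion):
    """Return the normalized posterior over services for one emotion label."""
    scores = {s: priors[s] * likelihood[s][emotion] for s in priors}
    z = sum(scores.values())
    return {s: v / z for s, v in scores.items()}

posterior = reason_service("sad")
print(max(posterior, key=posterior.get))  # -> call_caregiver
```

A full Bayesian network would chain this with context variables (location, time, activity) rather than conditioning on the emotion label alone.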

Multi-Modal Sensing M2M Healthcare Service in WSN

  • Chung, Wan-Young
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.6 no.4
    • /
    • pp.1090-1105
    • /
    • 2012
  • A multi-modal sensing M2M healthcare monitoring system for the continuous monitoring of patients in their natural physiological states, or of elderly persons with chronic diseases, is summarized. The system is designed for homecare, or for monitoring the elderly who live in the countryside or in small rest homes without sufficient support from caregivers or doctors, rather than for patient monitoring in a large hospital environment. Further insights into the natural cause and progression of diseases are afforded by context-aware sensing, which includes the use of accelerometers to monitor patient activities, and by location-aware indoor tracking based on ultrasonic and RF sensing. Moreover, indoor location tracking provides information about the location of patients in their physical environment and helps caregivers provide appropriate support.

Dynamic Analysis of Carbon-fiber-reinforced Plastic for Different Multi-layered Fabric Structure (적층 직물 구조에 따른 탄소강화플라스틱 소재 동적 특성 분석)

  • Kim, Chan-Jung
    • Transactions of the Korean Society for Noise and Vibration Engineering
    • /
    • v.26 no.4
    • /
    • pp.375-382
    • /
    • 2016
  • The mechanical properties of a carbon-fiber-reinforced plastic (CFRP) are determined first by its two constituents, carbon fiber and polymer resin, and second by the choice of multi-layered structure. Many combinations of fabric layers, e.g. plain weave and twill weave, are candidates for test specimens of basic mechanical components, so reliable identification of the dynamic behavior of the possible multi-layered structures is essential during the development of CFRP-based component systems. In this paper, three kinds of multi-layered structure specimens were prepared, and their dynamic characteristics were identified through a classical modal test with an impact hammer. In addition, a design sensitivity analysis based on the transmissibility function was applied to the measured response data, and the response sensitivity at each resonance frequency was compared across the three CFRP test specimens. Finally, the CFRP specimens with different multi-layered fabric structures are evaluated on the basis of the experimental results.
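The transmissibility function underlying the sensitivity analysis is the frequency-domain ratio of two measured responses. A minimal computation on simulated impact-test signals (sampling rate, resonance frequency, and amplitudes are all illustrative) might look like:

```python
import numpy as np

fs = 1000.0                        # sampling rate [Hz]
t = np.arange(0, 2.0, 1 / fs)
f0 = 50.0                          # assumed specimen resonance [Hz]

# Simulated responses at a reference and an output point: the output is
# amplified and phase-shifted near the resonance.
x_ref = np.sin(2 * np.pi * f0 * t) + 0.3 * np.sin(2 * np.pi * 120 * t)
x_out = 2.5 * np.sin(2 * np.pi * f0 * t - 0.4) \
        + 0.3 * np.sin(2 * np.pi * 120 * t)

X_ref, X_out = np.fft.rfft(x_ref), np.fft.rfft(x_out)
freqs = np.fft.rfftfreq(len(t), 1 / fs)

# Transmissibility magnitude |T(f)| = |X_out(f)| / |X_ref(f)| (guarded divide)
T = np.abs(X_out) / np.maximum(np.abs(X_ref), 1e-12)
print(round(float(T[np.argmin(np.abs(freqs - f0))]), 2))  # -> 2.5 at f0
```

Comparing how |T| at each resonance shifts between specimens is, in outline, what the design sensitivity comparison in the abstract does.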

Development of the Virtual Driving Environment for the AWS ECU Test Platform of the Bi-modal Tram (저상굴절 궤도차량의 AWS ECU 테스트 플랫폼을 위한 가상 주행환경 개발)

  • Choi, Seong-Hoon;Park, Tea-Won;Lee, Soo-Ho;Moon, Kyung-Ho
    • Proceedings of the KSR Conference
    • /
    • 2007.11a
    • /
    • pp.283-290
    • /
    • 2007
  • A bi-modal tram has been developed to offer an advanced transportation service compared with existing vehicles. An All-Wheel-Steering (AWS) system is applied to the bi-modal tram to satisfy the required steering performance, because the tram has an extended length and an articulated mechanism. An ECU for the steering system is essential to steer the wheels on the 2nd and 3rd axles with the specified AWS algorithm under the prescribed driving conditions. A Hardware-In-the-Loop Simulation (HILS) system is planned for evaluating the steering system of the bi-modal tram. The HILS system contains kinematic links with hydraulic actuators to steer the wheels on the 2nd and 3rd axles, as well as the same steering mechanism as the actual vehicle. The key to realizing the HILS system is controlling the movement of the hydraulic actuator that reflects the lateral steering reaction force on each wheel, but this reaction force changes continuously with the driving conditions. Therefore, simulation with a multi-body dynamics model is used to obtain the required forces.


Research on Generative AI for Korean Multi-Modal Montage App (한국형 멀티모달 몽타주 앱을 위한 생성형 AI 연구)

  • Lim, Jeounghyun;Cha, Kyung-Ae;Koh, Jaepil;Hong, Won-Kee
    • Journal of Service Research and Studies
    • /
    • v.14 no.1
    • /
    • pp.13-26
    • /
    • 2024
  • Multi-modal generation is the process of generating results from a variety of information, such as text, images, and audio. With the rapid development of AI technology, a growing number of multi-modal systems synthesize different types of data to produce results. In this paper, we present an AI system that uses speech and text recognition to describe a person and generate a montage image. While existing montage generation technology is based on the appearance of Westerners, the montage generation system developed in this paper learns a model based on Korean facial features. It can therefore create more accurate and effective Korean montage images from multi-modal voice and text specific to Koreans. Since the developed montage generation app can be used to produce a draft montage, it can dramatically reduce the manual labor of existing montage production personnel. For this purpose, we utilized the persona-based virtual-person montage data provided by the AI-Hub of the National Information Society Agency. AI-Hub is an AI integration platform aimed at providing a one-stop service by building the artificial intelligence training data necessary for the development of AI technologies and services. The image generation system was implemented using VQGAN, a deep learning model for generating high-resolution images, and KoDALLE, a Korean-based image generation model. The trained AI model creates a montage image of a face that closely matches the description given by voice and text. To verify the practicality of the developed montage generation app, 10 testers used it, and more than 70% responded that they were satisfied. The montage generator can be used in various fields, such as criminal investigation, to describe facial features and turn them into an image.

Development of Driver's Emotion and Attention Recognition System using Multi-modal Sensor Fusion Algorithm (다중 센서 융합 알고리즘을 이용한 운전자의 감정 및 주의력 인식 기술 개발)

  • Han, Cheol-Hun;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.18 no.6
    • /
    • pp.754-761
    • /
    • 2008
  • As the automobile industry and its technologies develop, drivers tend to be more concerned about service matters than mechanical matters. For this reason, interest is growing in recognizing human knowledge and emotion in order to create a safe and convenient driving environment for drivers. Recognition of human knowledge and emotion is an emotion engineering technology that has been studied since the late 1980s to provide people with human-friendly services. Emotion engineering analyzes people's emotions through their faces, voices, and gestures; applied to automobiles, it can supply drivers with services suited to each driver's situation and help them drive safely. Furthermore, by recognizing a driver's gestures, we can prevent accidents caused by careless driving or dozing off at the wheel. The purpose of this paper is to develop a system that can recognize the driver's emotional state and attention for safe driving. First, we detect signals of the driver's emotion, sleepiness, and attention using bio-motion signals, and build several types of databases. By analyzing these databases, we identify characteristic features of drivers' emotion, sleepiness, and attention, and fuse the results through a multi-modal method to realize the system.
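A common way to realize such multi-modal fusion at the decision level is a weighted combination of per-modality classifier outputs; the modalities, weights, and emotion labels in the sketch below are illustrative assumptions rather than the paper's actual configuration.

```python
import numpy as np

EMOTIONS = ["calm", "stressed", "drowsy"]

def fuse(decisions, weights):
    """Decision-level fusion: weighted average of per-modality
    class-probability vectors, renormalized to sum to 1."""
    fused = np.zeros(len(EMOTIONS))
    for modality, probs in decisions.items():
        fused += weights[modality] * np.asarray(probs)
    return fused / fused.sum()

# Illustrative classifier outputs for one time window.
decisions = {
    "bio_signal": [0.2, 0.7, 0.1],   # e.g. heart-rate features
    "face":       [0.3, 0.5, 0.2],
    "gesture":    [0.4, 0.3, 0.3],
}
weights = {"bio_signal": 0.5, "face": 0.3, "gesture": 0.2}

fused = fuse(decisions, weights)
print(EMOTIONS[int(np.argmax(fused))])  # -> stressed
```

Weighting the bio-signal channel highest reflects the common design choice of trusting physiological measurements over visual cues for driver state; feature-level fusion, as in the abstract, would instead concatenate features before a single classifier.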