• Title/Summary/Keyword: multimodal

Search results: 657

Adaptive Multimodal In-Vehicle Information System for Safe Driving

  • Park, Hye Sun;Kim, Kyong-Ho
    • ETRI Journal / v.37 no.3 / pp.626-636 / 2015
  • This paper proposes an adaptive multimodal in-vehicle information system for safe driving. The proposed system filters input information based on both the priority assigned to the information and the given driving situation, to effectively manage input information and intelligently provide information to the driver. It then interacts with the driver using an adaptive multimodal interface by considering both the driving workload and the driver's cognitive reaction to the information it provides. It is shown experimentally that the proposed system can promote driver safety and enhance a driver's understanding of the information it provides by filtering the input information. In addition, the system can reduce a driver's workload by selecting an appropriate modality and corresponding level with which to communicate. An analysis of subjective questionnaires regarding the proposed system reveals that more than 85% of the respondents are satisfied with it. The proposed system is expected to provide prioritized information through an easily understood modality.
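
The abstract above gives no algorithmic detail, so the following is only a minimal Python sketch of the two ideas it names, priority/situation-based filtering and workload-aware modality selection; the message types, situation labels, and thresholds are hypothetical and not taken from the paper.

```python
from dataclasses import dataclass

# Hypothetical data model: the paper's actual message types, priorities,
# and workload measure are not specified in the abstract.
@dataclass
class Message:
    text: str
    priority: int      # 1 = safety-critical ... 5 = convenience info

def filter_messages(messages, driving_situation):
    """Keep only messages whose priority fits the current driving situation.

    Assumed rule: the busier the situation, the more critical a message
    must be to reach the driver.
    """
    max_priority = {"parked": 5, "cruising": 3, "heavy_traffic": 1}[driving_situation]
    return [m for m in messages if m.priority <= max_priority]

def select_modality(workload):
    """Pick an output modality and detail level from an assumed workload score in [0, 1]."""
    if workload > 0.7:
        return ("audio", "brief")        # high workload: short spoken cue only
    if workload > 0.4:
        return ("audio+icon", "summary")
    return ("audio+screen", "detailed")  # low workload: full visual detail

if __name__ == "__main__":
    inbox = [Message("Collision warning ahead", 1), Message("New podcast episode", 5)]
    for msg in filter_messages(inbox, "heavy_traffic"):
        print(select_modality(workload=0.8), msg.text)
```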

Designing a Framework of Multimodal Contents Creation and Playback System for Immersive Textbook (실감형 교과서를 위한 멀티모달 콘텐츠 저작 및 재생 프레임워크 설계)

  • Kim, Seok-Yeol;Park, Jin-Ah
    • The Journal of the Korea Contents Association / v.10 no.8 / pp.1-10 / 2010
  • For virtual education, a multimodal learning environment with haptic feedback, termed an 'immersive textbook', is necessary to enhance learning effectiveness. However, learning contents for the immersive textbook are not widely available due to constraints in the creation and playback environments. To address this problem, we propose a framework for producing and displaying multimodal contents for the immersive textbook. Our framework provides an XML-based meta-language for producing multimodal learning contents in the form of an intuitive script. Thus it can help users without any prior knowledge of multimodal interactions produce their own learning contents. The contents are then interpreted by a script engine and delivered to the user through visual and haptic rendering loops. We also implemented a prototype based on these proposals and performed a user evaluation to verify the validity of our framework.
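
The framework's actual XML meta-language is not shown in the abstract; the sketch below illustrates, under an invented schema (all tag and attribute names are assumptions), how such a script could be parsed and dispatched to visual and haptic handlers by a toy script engine.

```python
import xml.etree.ElementTree as ET

# Hypothetical script: the framework's real meta-language is not given in
# the abstract, so this schema is invented purely to illustrate the
# "intuitive script -> script engine -> rendering loops" idea.
SCRIPT = """
<lesson title="The Heart">
  <scene id="1">
    <visual model="heart.obj" caption="A beating heart"/>
    <haptic effect="pulse" frequency_hz="1.2" stiffness="0.4"/>
  </scene>
</lesson>
"""

def play_visual(node):
    print(f"[visual] render {node.get('model')} ({node.get('caption')})")

def play_haptic(node):
    print(f"[haptic] effect={node.get('effect')} at {node.get('frequency_hz')} Hz")

HANDLERS = {"visual": play_visual, "haptic": play_haptic}

def run_script(xml_text):
    """Toy script engine: walk the scenes and dispatch each modality node."""
    root = ET.fromstring(xml_text)
    for scene in root.iter("scene"):
        for node in scene:
            HANDLERS[node.tag](node)

if __name__ == "__main__":
    run_script(SCRIPT)
```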

The Effect of AI Agent's Multi Modal Interaction on the Driver Experience in the Semi-autonomous Driving Context : With a Focus on the Existence of Visual Character (반자율주행 맥락에서 AI 에이전트의 멀티모달 인터랙션이 운전자 경험에 미치는 효과 : 시각적 캐릭터 유무를 중심으로)

  • Suh, Min-soo;Hong, Seung-Hye;Lee, Jeong-Myeong
    • The Journal of the Korea Contents Association / v.18 no.8 / pp.92-101 / 2018
  • As interactive AI speakers become popular, voice recognition is regarded as an important vehicle-driver interaction method for autonomous driving situations. The purpose of this study is to confirm whether multimodal interaction, in which feedback is delivered through both the auditory mode and a visual AI character on screen, is more effective for optimizing user experience than the auditory mode alone. Participants performed music selection and adjustment tasks through the AI speaker while driving, and we measured information and system quality, presence, perceived usefulness and ease of use, and continuance intention. The analysis showed no multimodal effect of the visual character on most user experience factors, nor on continuance intention. Rather, the auditory-only mode was found to be more effective than the multimodal mode for the information quality factor. In the semi-autonomous driving stage, which requires the driver's cognitive effort, multimodal interaction is thus not effective in optimizing user experience compared with single-mode interaction.

The Effectiveness of Additional Treatment Modalities after the Failure of Recanalization by Thrombectomy Alone in Acute Vertebrobasilar Arterial Occlusion

  • Kim, Seong Mook;Sohn, Sung-Il;Hong, Jeong-Ho;Chang, Hyuk-Won;Lee, Chang-Young;Kim, Chang-Hyun
    • Journal of Korean Neurosurgical Society / v.58 no.5 / pp.419-425 / 2015
  • Objective: Acute vertebrobasilar artery occlusion (AVBAO) is a devastating disease with a high mortality rate. One of the most important factors affecting favorable clinical outcome is early recanalization. Mechanical thrombectomy is an emerging treatment strategy for achieving high recanalization rates. However, thrombectomy alone can be insufficient for complete recanalization, especially in acute stroke involving large-artery atheromatous disease. The purpose of this study is to investigate the safety and efficacy of mechanical thrombectomy in AVBAO. Methods: Fourteen consecutive patients with AVBAO were treated with mechanical thrombectomy. Additional multimodal treatments were intra-arterial (IA) thrombolysis, balloon angioplasty, or permanent stent placement. Recanalization by thrombectomy alone and by multimodal treatment was assessed by the Thrombolysis in Cerebral Infarction (TICI) score. Clinical outcome was determined using the National Institutes of Health Stroke Scale (NIHSS) at 7 days and the modified Rankin Scale (mRS) at 3 months. Results: Thrombectomy alone and multimodal treatments were performed in 10 patients (71.4%) and 4 patients (28.6%), respectively. Successful recanalization (TICI 2b-3) was achieved in 11 patients (78.6%). Among these 11 patients, 3 (27.3%) underwent multimodal treatment due to underlying atherosclerotic stenosis. Ten (71.4%) of the 14 patients showed an NIHSS score improvement of >10. Overall mortality was 3 of 14 (21.4%). Conclusion: We suggest that mechanical thrombectomy is safe and effective for improving recanalization rates in AVBAO, with low complication rates. Also, in carefully selected patients, after the failure of recanalization by thrombectomy alone, additional multimodal treatment such as IA thrombolysis, balloon angioplasty, or stenting may be needed to achieve successful recanalization.

Multimodal based Storytelling Experience Using Virtual Reality in Museum (가상현실을 이용한 박물관 내 멀티모달 스토리텔링 경험 연구)

  • Lee, Ji-Hye
    • The Journal of the Korea Contents Association / v.18 no.10 / pp.11-19 / 2018
  • This paper examines the multimodal storytelling experience created by applying virtual reality technology in museums. Specifically, it argues that virtual reality supports both an intuitive understanding of history and a multimodal experience of the exhibition space. The research investigates cases of virtual reality use in the museum sector. As its method, the paper conducts a literature review of multimodal experience and of examples applying virtual-reality-related technologies in museums, examining the related cases needed to clarify the concept. Based on this investigation, the paper proposes constituent elements for VR-based multimodal storytelling. Ultimately, it suggests the elements for building VR storytelling in which dynamic audio-visual and interaction modes are combined with historical resources for diverse audiences.

Usefulness of Morphine in the Periarticular Multimodal Drug Local Injection after Surgery for Hallux Valgus (무지 외반증 수술에서 관절 주위 다중 약물 국소 투여 시 Morphine의 유용성)

  • Cho, Jae Ho;Choi, Hong Joon;Kim, Yu Mi;Kim, Jae Young;Wang, Bae Gun;Lee, Woo Chun
    • Journal of Korean Foot and Ankle Society / v.17 no.2 / pp.93-99 / 2013
  • Purpose: Proximal metatarsal chevron osteotomy for hallux valgus is followed by a significant amount of postoperative pain. Periarticular multimodal drug local injection can be an option for pain control. This study was conducted to evaluate the efficacy of morphine as a component of the multimodal drug mixture and to confirm the effect of periarticular multimodal drug local injection on controlling early postoperative pain. Materials and Methods: Between March 2012 and June 2012, 22 patients underwent proximal metatarsal chevron osteotomy for the correction of hallux valgus deformity. Ten patients (Group A) received a periarticular injection of a test solution made with morphine, ropivacaine, epinephrine, and ketorolac; 12 patients (Group B) received a periarticular injection of the test solution without morphine. The visual analog scale (VAS) score was checked at 2, 4, 6, and 8 hours and at 1 and 2 days after surgery. Results: The VAS scores from 2 hours to 1 day postoperatively showed no significant difference between the two groups, but the VAS score at 2 days postoperatively was significantly higher in Group A than in Group B. The amount of additional pain medication (tramadol HCl) used showed no significant difference between the two groups for 3 days after surgery. Conclusion: Periarticular multimodal drug local injection was effective in reducing pain after hallux valgus surgery regardless of whether morphine was included.

A Virtual Reality System for the Cognitive and Behavioral Assessment of Schizophrenia (정신분열병 환자의 인지적/행동적 특성평가를 위한 가상현실시스템 구현)

  • Lee, Jang-Han;Cho, Won-Geun;Kim, Ho-Sung;Ku, Jung-Hun;Kim, Jae-Hun;Kim, Byoung-Nyun;Kim, Sun-I.
    • Science of Emotion and Sensibility / v.6 no.3 / pp.55-62 / 2003
  • Patients with schizophrenia have thinking disorders such as delusions or hallucinations because they have a deficit in the ability to systematize and integrate information; therefore, they cannot integrate or systematize visual, auditory, and tactile stimuli. In this study, we suggest a virtual reality system for assessing the cognitive ability of schizophrenia patients, based on the brain's multimodal integration model. The virtual reality system provides multimodal stimuli, such as visual and auditory stimuli, to the patient, and can evaluate the patient's multimodal integration and working memory integration abilities by making the patient interpret and react to multimodal stimuli that must be remembered for a given period of time. The clinical study showed that the results of the developed virtual reality program are comparable to those of the WCST and the SPM.


A Study on Concealed Damage through Car-Ferry International Multimodal Transport between Korea and Japan (한일간 카-훼리 국제복합운송에 따른 부명손해(不明損害)에 관한 연구)

  • Park, Sang-Kab;Kim, Jung-Ho
    • Journal of Navigation and Port Research / v.35 no.6 / pp.523-531 / 2011
  • The recent increase in international car-ferry lines between Korea and Japan, as well as China, creates a need for the proper transportation of special cargo such as machinery and luxury yachts. International multimodal transport, especially by international car ferry through a truck-sea-truck system, makes it possible to fulfill shippers' needs for "door-to-door service" of such special goods. However, this international multimodal transport of bulk cargo may give rise to claims for concealed damage occurring during transportation. For this reason, this study aims to examine the liability system of the multimodal transport operator, to investigate liability for concealed damage theoretically, and finally to seek proper measures for handling it. Furthermore, this paper intends to examine claims for concealed damage in order to further international multimodal transport by car ferries between Korea and Japan.

Design and Implementation of Emergency Recognition System based on Multimodal Information (멀티모달 정보를 이용한 응급상황 인식 시스템의 설계 및 구현)

  • Kim, Eoung-Un;Kang, Sun-Kyung;So, In-Mi;Kwon, Tae-Kyu;Lee, Sang-Seol;Lee, Yong-Ju;Jung, Sung-Tae
    • Journal of the Korea Society of Computer and Information / v.14 no.2 / pp.181-190 / 2009
  • This paper presents a multimodal emergency recognition system based on visual, audio, and gravity sensor information. It consists of a video processing module, an audio processing module, a gravity sensor processing module, and a multimodal integration module. The video processing module and the gravity sensor processing module each detect actions such as moving, stopping, and fainting and transfer them to the multimodal integration module. The multimodal integration module detects an emergency by fusing the transferred information and verifies it by asking a question and recognizing the answer via the audio channel. The experimental results show that the recognition rate of the video processing module alone is 91.5% and that of the gravity sensor processing module alone is 94%, but when both sources of information are combined the recognition rate reaches 100%.
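
As a rough illustration of the fuse-then-verify flow the abstract describes, here is a minimal Python sketch; the fusion rule, the spoken question, and the answer handling are assumptions for illustration, not the paper's actual classifiers or thresholds.

```python
# Minimal sketch of a fuse-then-verify emergency check. The decision rule
# and the audio confirmation step are assumptions; the paper's real
# recognition modules are not described in the abstract.

def fuse_detections(video_action: str, gravity_action: str) -> bool:
    """Flag a candidate emergency when either channel reports 'fainting'."""
    return "fainting" in (video_action, gravity_action)

def verify_by_audio(ask, listen) -> bool:
    """Confirm the emergency over the audio channel.

    `ask` speaks a question; `listen` returns the recognized answer
    (an empty string if the user does not respond).
    """
    ask("Are you all right? Please answer.")
    answer = listen()
    # A silent or non-reassuring answer is treated as a confirmed emergency.
    return answer.strip().lower() not in {"yes", "i'm fine", "ok"}

if __name__ == "__main__":
    candidate = fuse_detections(video_action="fainting", gravity_action="stopping")
    if candidate:
        emergency = verify_by_audio(ask=print, listen=lambda: "")
        print("Emergency confirmed" if emergency else "False alarm")
```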

Ultrasound-optical imaging-based multimodal imaging technology for biomedical applications (바이오 응용을 위한 초음파 및 광학 기반 다중 모달 영상 기술)

  • Moon Hwan Lee;HeeYeon Park;Kyungsu Lee;Sewoong Kim;Jihun Kim;Jae Youn Hwang
    • The Journal of the Acoustical Society of Korea / v.42 no.5 / pp.429-440 / 2023
  • This study explores recent research trends and potential applications of ultrasound-optical imaging-based multimodal technology. Ultrasound imaging has been widely utilized in medical diagnostics due to its real-time capability and relative safety. However, the low resolution of ultrasound imaging has prompted active research on multimodal imaging techniques that combine ultrasound with other imaging modalities to enhance diagnostic accuracy. In particular, ultrasound-optical imaging-based multimodal technology exploits each modality's advantages while compensating for its limitations, offering a means to improve diagnostic accuracy. Various forms of multimodal imaging techniques have been proposed, including the fusion of optical coherence tomography, photoacoustic, fluorescence, fluorescence lifetime, and spectral technologies with ultrasound. This study investigates recent research trends in ultrasound-optical imaging-based multimodal technology and demonstrates its potential applications in the biomedical field, providing insights into the progress of integrating ultrasound and optical technologies and laying the foundation for novel approaches to enhance diagnostic accuracy in the biomedical domain.
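
To make the notion of ultrasound-optical image fusion concrete, the following is a toy Python/NumPy sketch that alpha-blends a synthetic optical (e.g., photoacoustic) map onto a synthetic B-mode frame. Real multimodal systems require calibration and spatial registration, which are omitted here; all arrays and parameters are invented for illustration.

```python
import numpy as np

def fuse_overlay(ultrasound: np.ndarray, optical: np.ndarray, alpha: float = 0.4) -> np.ndarray:
    """Blend a normalized optical map onto a normalized B-mode image.

    Assumes both images are already co-registered and the same shape.
    """
    us = (ultrasound - ultrasound.min()) / (np.ptp(ultrasound) + 1e-9)
    opt = (optical - optical.min()) / (np.ptp(optical) + 1e-9)
    return (1 - alpha) * us + alpha * opt

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    b_mode = rng.random((128, 128))      # stand-in ultrasound frame
    pa_map = np.zeros((128, 128))
    pa_map[40:60, 40:60] = 1.0           # stand-in optical absorber region
    fused = fuse_overlay(b_mode, pa_map)
    print(fused.shape, float(fused.max()))
```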