• Title/Summary/Keyword: multimodal

Single-Cell Molecular Barcoding to Decode Multimodal Information Defining Cell States

  • Ik Soo Kim
    • Molecules and Cells
    • /
    • v.46 no.2
    • /
    • pp.74-85
    • /
    • 2023
  • Single-cell research has been a breakthrough in biology, enabling the study of heterogeneous cell groups, such as tissues and organs, in development and disease. Molecular barcoding and subsequent sequencing technologies insert a cell-specific barcode into each isolated cell, allowing data to be separated cell by cell. Given that multimodal information from a cell defines its precise cellular state, recent technical advances focus on simultaneously extracting multimodal data recorded in different biological materials (DNA, RNA, protein, etc.). This review summarizes recently developed single-cell multiomics approaches that pair genome, epigenome, and protein profiles with the transcriptome. In particular, we focus on how to anchor or tag molecules from a cell, improve throughput with sample multiplexing, and record lineages, and we further discuss future developments of the technology.

A Model for Evaluating the Connectivity of Multimodal Transit Networks (복합수단 대중교통 네트워크의 연계성 평가 모형)

  • Park, Jun-Sik;Gang, Seong-Cheol
    • Journal of Korean Society of Transportation
    • /
    • v.28 no.3
    • /
    • pp.85-98
    • /
    • 2010
  • As transit networks become more multimodal, the connectivity of transit networks becomes an important concept. This study aims to develop a quantitative model for measuring the connectivity of multimodal transit networks. To that end, we select the length, capacity, and speed of a transit line as its evaluation measures, and define the connecting power of a transit line as the product of those measures. The degree centrality of a node, a widely used centrality measure in social network analysis, is employed with appropriate modifications suited to transit networks. Using the degree centrality of a transit stop and the connecting powers of the transit lines serving it, we develop an index quantifying the stop's level of connectivity. From the connectivity indexes of transit stops, we derive the connectivity index of a transit line as well as of an area of a multimodal transit network. In addition, we present a method to evaluate the connectivity of a transfer center using the connectivity indexes of transit stops and passenger acceptance rate functions. A case study shows that the connectivity evaluation model developed in this study takes the characteristics of multimodal transit networks well into consideration, adequately measures the connectivity of transit stops, lines, and areas, and can furthermore be used in determining the level of service of transfer centers.
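The connecting-power and stop-level measures described in this abstract can be sketched as follows. This is an illustrative reconstruction from the abstract only, not the paper's exact formulation: the normalization and the precise degree-centrality modification are assumptions here.

```python
# Illustrative sketch of the connectivity measures described in the abstract.
# The exact normalization used in the paper is not given here; connecting
# power is taken as the plain product of the three evaluation measures.

def connecting_power(length_km, capacity_pax, speed_kmh):
    """Connecting power of a transit line: the product of its length,
    capacity, and speed."""
    return length_km * capacity_pax * speed_kmh

def stop_connectivity(lines_at_stop):
    """Connectivity index of a transit stop: here, the sum of the connecting
    powers of the lines serving it (a degree-centrality-style aggregation)."""
    return sum(connecting_power(*line) for line in lines_at_stop)

# Example: a transfer stop served by a bus line and a metro line.
bus = (12.0, 60, 25.0)      # length (km), capacity (passengers), speed (km/h)
metro = (30.0, 800, 40.0)
index = stop_connectivity([bus, metro])
```

Aggregating stop indexes over the stops of a line, or over the stops in an area, would then give the line- and area-level indexes the abstract mentions.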

Character-based Subtitle Generation by Learning of Multimodal Concept Hierarchy from Cartoon Videos (멀티모달 개념계층모델을 이용한 만화비디오 컨텐츠 학습을 통한 등장인물 기반 비디오 자막 생성)

  • Kim, Kyung-Min;Ha, Jung-Woo;Lee, Beom-Jin;Zhang, Byoung-Tak
    • Journal of KIISE
    • /
    • v.42 no.4
    • /
    • pp.451-458
    • /
    • 2015
  • Previous multimodal learning methods focus on problem-solving aspects, such as image and video search and tagging, rather than on knowledge acquisition via content modeling. In this paper, we propose the Multimodal Concept Hierarchy (MuCH), a content modeling method built on a cartoon video dataset, together with a character-based subtitle generation method using the learned model. The MuCH model has a multimodal hypernetwork layer, in which the patterns of words and image patches are represented, and a concept layer, in which each concept variable is represented by a probability distribution over the words and image patches. The model can learn the characteristics of the characters as concepts from video subtitles and scene images using a Bayesian learning method, and can generate character-based subtitles from the learned model when text queries are provided. As an experiment, the MuCH model learned concepts from 'Pororo' cartoon videos with a total length of 268 minutes and generated character-based subtitles. Finally, we compare the results with those of other multimodal learning models. The experimental results indicate that, given the same text query, our model generates more accurate and more character-specific subtitles than the other models.

Multi-Object Goal Visual Navigation Based on Multimodal Context Fusion (멀티모달 맥락정보 융합에 기초한 다중 물체 목표 시각적 탐색 이동)

  • Jeong Hyun Choi;In Cheol Kim
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.12 no.9
    • /
    • pp.407-418
    • /
    • 2023
  • Multi-Object Goal Visual Navigation (MultiOn) is a visual navigation task in which an agent must visit multiple object goals in an unknown indoor environment in a given order. Existing models for the MultiOn task cannot exploit an integrated view of multimodal context because they use only a unimodal context map. To overcome this limitation, we propose a novel deep neural network-based agent model for the MultiOn task. The proposed model, MCFMO, uses a multimodal context map containing visual appearance features, semantic features of environmental objects, and goal object features. Moreover, the model effectively fuses these three heterogeneous feature types into a global multimodal context map using a point-wise convolutional neural network module. Lastly, it adopts an auxiliary task learning module that predicts the observation status, goal direction, and goal distance, which guides the agent in learning the navigation policy efficiently. Through various quantitative and qualitative experiments using the Habitat-Matterport3D simulation environment and scene dataset, we demonstrate the superiority of the proposed model.
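The point-wise fusion step this abstract describes can be illustrated with a small sketch: a 1×1 convolution over a concatenated context map is simply a learned linear map applied independently at each map cell. The shapes and channel counts below are invented for illustration and are not taken from the paper.

```python
import numpy as np

# Illustrative sketch (not the authors' code): fuse three per-cell feature
# maps into one multimodal context map with a point-wise (1x1) convolution.
# A 1x1 convolution is a linear map applied independently at every cell.

H, W = 8, 8
rng = np.random.default_rng(0)
visual   = rng.normal(size=(H, W, 16))  # visual appearance features
semantic = rng.normal(size=(H, W, 8))   # semantic features of objects
goal     = rng.normal(size=(H, W, 4))   # goal object features

x = np.concatenate([visual, semantic, goal], axis=-1)  # (H, W, 28)
kernel = rng.normal(size=(28, 32))                     # 1x1 conv weights
bias = np.zeros(32)
fused = x @ kernel + bias                              # (H, W, 32) context map
```

In a trained model the kernel and bias would be learned parameters; here they are random placeholders to show the shape arithmetic.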

A Parallel Genetic Algorithm with Diversity-Controlled Migration and Its Applicability to Multimodal Function Optimization

  • YAMAMOTO, Fujio;ARAKI, Tomoyuki
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1998.06a
    • /
    • pp.629-633
    • /
    • 1998
  • Proposed here is a parallel genetic algorithm with intermittent migration among subpopulations, intended to maintain diversity in the population over a long period. The method was applied to finding the global maximum of several multimodal functions for which no other methods seem to be useful. Favorable results and a detailed analysis are also presented.
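The idea can be sketched as a toy island-model GA. The selection scheme, migration topology, and parameters below are assumptions for illustration, not the paper's algorithm.

```python
import random

# Toy island-model GA (illustrative assumptions, not the paper's algorithm):
# subpopulations evolve independently on a 1-D real-valued problem, and every
# `interval` generations each island's best individual migrates to the next
# island, replacing that island's worst member. Intermittent (rather than
# continuous) migration keeps the islands diverse between exchanges.

def evolve(pop, fitness, sigma=0.1):
    """One generation: keep the fitter half, refill with mutated copies."""
    survivors = sorted(pop, key=fitness, reverse=True)[: len(pop) // 2]
    children = [x + random.gauss(0, sigma) for x in survivors]
    return survivors + children

def island_ga(fitness, islands=4, size=20, gens=60, interval=10):
    pops = [[random.uniform(-10, 10) for _ in range(size)]
            for _ in range(islands)]
    for g in range(1, gens + 1):
        pops = [evolve(p, fitness) for p in pops]
        if g % interval == 0:  # intermittent migration step
            for i, p in enumerate(pops):
                target = pops[(i + 1) % islands]
                worst = min(range(len(target)),
                            key=lambda j: fitness(target[j]))
                target[worst] = max(p, key=fitness)
    return max((max(p, key=fitness) for p in pops), key=fitness)
```

For a multimodal objective such as f(x) = -|x² - 4| (maxima at x = ±2), separate islands can converge to different peaks before migration mixes them.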

An Enhanced Genetic Algorithm for Optimization of Multimodal Function (다봉성 함수의 최적화를 위한 향상된 유전알고리듬의 제안)

  • 김영찬;양보석
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2000.05a
    • /
    • pp.241-244
    • /
    • 2000
  • An optimization method based on an enhanced genetic algorithm is proposed for multimodal function optimization in this paper. The method consists of two main steps. The first is a global search step using the genetic algorithm (GA) and a function assurance criterion (FAC); whether a solution belongs to the initial solution group is decided according to the FAC. The second step determines the resemblance between individuals and searches for optimum solutions by a single-point method in a reconstructed search space. Two numerical examples are also presented for comparison with conventional methods.

Leveraging Multimodal Supports using Mobile Phones for Obesity Management in Elementary-School Children: Program Providers' Perspective from a Qualitative Study (모바일폰을 이용한 초등학생 비만관리 복합지원의 잠재적 이로움 : 프로그램 제공자 측면에 대한 질적 연구)

  • Park, Mi-Young;Shim, Jae Eun;Kim, Kirang;Hwang, Ji-Yun
    • Korean Journal of Community Nutrition
    • /
    • v.22 no.3
    • /
    • pp.238-247
    • /
    • 2017
  • Objectives: This study was conducted to investigate providers' perspectives on current challenges in implementing a program for the prevention and management of childhood obesity, and on the adoption of mobile phones as a potential means of leveraging multimodal delivery and support in a school setting. Methods: Qualitative data were collected through face-to-face in-depth interviews with 23 elementary-school teachers, 6 pediatricians, and 6 dieticians from community health centers, and were analyzed using a qualitative research methodology. Results: The current challenges and potential solutions of obesity-prevention and -management programs for elementary-school children were each deduced as two themes. The lack of tailored intervention, due to limited recipient motivation, lack of individualized behavioral intervention, and differing environmental conditions, can be solved by mobile technology-based personalized intervention, which brings about interactive recipient participation, customized behavioral intervention, and ubiquitous accessibility. The lack of sustainable management, due to stigmatization, limited interactions between program providers, and inconsistent administrative support, can be handled by school-based multimodal support using a mobile platform, which provides health-behavior education at larger scale, enables interactive networking between program participants, and minimizes administrative burden. Conclusions: Adoption of a mobile-based health management program may overcome current limitations of child obesity programs, such as the lack of tailored intervention and sustainable management, via personalized intervention and multimodal support, although some concerns, such as increased screen time, need to be carefully considered in further studies.

A Full Body Gumdo Game with an Intelligent Cyber Fencer using a Multi-modal (3D Vision and Speech) Interface (멀티모달 인터페이스(3차원 시각과 음성)를 이용한 지능적 가상검객과의 전신 검도게임)

  • 윤정원;김세환;류제하;우운택
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.9 no.4
    • /
    • pp.420-430
    • /
    • 2003
  • This paper presents an immersive multimodal Gumdo simulation game that allows a user to experience whole-body interaction with an intelligent cyber fencer. The proposed system consists of three modules: (i) a nondistracting multimodal interface with 3D vision and speech; (ii) an intelligent cyber fencer; and (iii) immersive feedback through a big screen and sound. First, the multimodal interface with 3D vision and speech allows the user to move around and shout without being encumbered. Second, the intelligent cyber fencer provides intelligent interactions through perception and reaction modules created from the analysis of real Gumdo games. Finally, immersive audio-visual feedback from a big screen and sound effects helps the user experience an immersive interaction. The proposed system thus provides the user with an immersive Gumdo experience involving whole-body movement, and can be applied to various applications such as education, exercise, and art performance.

Freight Demand Analysis for Multimodal Shipments (복합수단운송을 고려한 화물통행수요분석 방안)

  • Hong, Da-Hee;Park, Min-Choul;Lee, Jung-Yub;Hahn, Jin-Seok;Kang, Jae-Won
    • Journal of Korean Society of Transportation
    • /
    • v.30 no.4
    • /
    • pp.85-94
    • /
    • 2012
  • Modern freight transport pursues not only the reduction of logistics costs but also green logistics and efficient shipments. To accomplish these goals, various policies regarding multimodal shipments and stopovers at logistics facilities have been widely adopted. This situation requires changes in existing methods for analyzing freight demand. However, reliable freight demand forecasting remains limited, since the transport research field lacks a robust freight demand model that can accommodate transshipments at logistics facilities. This study suggests a novel method for analyzing freight demand that can consider transshipments in multimodal networks, and discusses the applicability of the method through an example test.

Outcomes of the Multimodal Treatment of Malignant Pleural Mesothelioma: The Role of Surgery

  • Na, Bub-Se;Kim, Ji Seong;Hyun, Kwanyong;Park, In Kyu;Kang, Chang Hyun;Kim, Young Tae
    • Journal of Chest Surgery
    • /
    • v.51 no.1
    • /
    • pp.35-40
    • /
    • 2018
  • Background: The treatment of malignant pleural mesothelioma (MPM) is challenging, and multimodal treatment including surgery is recommended; however, the role of surgery is debated. The treatment outcomes of MPM in Korea have not been reported. We analyzed the outcomes of MPM in the context of multimodal treatment, including surgery. Methods: The records of 29 patients with pathologically proven MPM from April 1998 to July 2015 were retrospectively reviewed. The treatment outcomes of the surgery and non-surgery groups were compared. Results: The overall median survival time was 10.6 months, and the overall 3-year survival rate was 25%. No postoperative 30-day or in-hospital mortality occurred in the surgery group. Postoperative complications included tachyarrhythmia (n=4), pulmonary thromboembolism (n=1), pneumonia (n=1), chylothorax (n=1), and wound complications (n=3). The treatment outcomes between the surgery and non-surgery groups were not significantly different (3-year survival rate: 31.3% vs. 16.7%, respectively; p=0.47). In a subgroup analysis, there was no significant difference in the treatment outcomes between the extrapleural pneumonectomy group and the non-surgery group (3-year survival rate: 45.5% vs. 16.7%, respectively; p=0.23). Conclusion: Multimodal treatment incorporating surgery did not show better outcomes than non-surgical treatment. A nationwide multicenter data registry and prospective randomized controlled studies are necessary to optimize the treatment of MPM.