• Title/Summary/Keyword: Multimodal model

Search Results: 132

A Bayesian Approach to Gumbel Mixture Distribution for the Estimation of Parameter and its use to the Rainfall Frequency Analysis (Bayesian 기법을 이용한 혼합 Gumbel 분포 매개변수 추정 및 강우빈도해석 기법 개발)

  • Choi, Hong-Geun;Uranchimeg, Sumiya;Kim, Yong-Tak;Kwon, Hyun-Han
    • KSCE Journal of Civil and Environmental Engineering Research / v.38 no.2 / pp.249-259 / 2018
  • More than half of Korea's annual rainfall occurs in the summer season due to its climatic conditions and geographical location. Frequency analysis is widely adopted for designing hydraulic structures under such concentrated rainfall conditions. Among the various distributions, the univariate Gumbel distribution has been routinely used for rainfall frequency analysis in Korea. However, distributional changes in extreme rainfall have been observed globally, including in Korea. More specifically, rainfall frequency analysis based on the univariate Gumbel distribution often fails to describe multimodal behaviors, which are mainly influenced by distinct climate conditions during the wet season. In this context, we proposed a Gumbel mixture distribution based rainfall frequency analysis within a Bayesian framework, and the results were compared with those of the univariate model. The proposed model showed better performance in describing the underlying distributions, leading to lower Bayesian information criterion (BIC) values. The mixed Gumbel distribution was also more robust than the single Gumbel distribution in describing the upper tail, which plays a crucial role in obtaining reliable estimates of design rainfall and its uncertainty. Therefore, it can be concluded that the mixed Gumbel distribution is more suitable for frequency analysis of extreme rainfall data whose distribution has two or more peaks.
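The core comparison in this abstract, fitting a two-component Gumbel mixture against a single Gumbel distribution and ranking them by BIC, can be sketched as below. This is a maximum-likelihood sketch on synthetic bimodal data, not the paper's full Bayesian (e.g. MCMC) parameter estimation; the sample sizes, locations, and scales are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import gumbel_r

# Synthetic bimodal "annual maximum rainfall" sample: two Gumbel populations
# standing in for two distinct wet-season climate regimes (assumed values).
x = np.concatenate([
    gumbel_r.rvs(loc=60, scale=10, size=300, random_state=1),
    gumbel_r.rvs(loc=120, scale=15, size=200, random_state=2),
])

def nll_single(p):
    """Negative log-likelihood of a single Gumbel distribution."""
    loc, scale = p
    if scale <= 0:
        return np.inf
    return -gumbel_r.logpdf(x, loc, scale).sum()

def nll_mix(p):
    """Negative log-likelihood of a two-component Gumbel mixture."""
    w, l1, s1, l2, s2 = p
    if not (0 < w < 1) or s1 <= 0 or s2 <= 0:
        return np.inf
    pdf = w * gumbel_r.pdf(x, l1, s1) + (1 - w) * gumbel_r.pdf(x, l2, s2)
    return -np.log(pdf + 1e-300).sum()

# Fit both models by maximum likelihood (derivative-free Nelder-Mead).
r1 = minimize(nll_single, x0=[x.mean(), x.std()], method="Nelder-Mead")
r2 = minimize(nll_mix, x0=[0.5, 60.0, 10.0, 120.0, 15.0], method="Nelder-Mead")

# BIC = 2*NLL + k*ln(n); the mixture pays a larger penalty (k=5 vs k=2)
# but should still win on clearly bimodal data.
n = len(x)
bic_single = 2 * r1.fun + 2 * np.log(n)
bic_mix = 2 * r2.fun + 5 * np.log(n)
print(f"BIC single: {bic_single:.1f}, BIC mixture: {bic_mix:.1f}")
```

On this kind of bimodal sample the mixture's likelihood gain far exceeds the extra-parameter penalty, which mirrors the abstract's finding of lower BIC values for the mixed Gumbel model.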

Artificial Intelligence for Assistance of Facial Expression Practice Using Emotion Classification (감정 분류를 이용한 표정 연습 보조 인공지능)

  • Kim, Dong-Kyu;Lee, So Hwa;Bong, Jae Hwan
    • The Journal of the Korea institute of electronic communication sciences / v.17 no.6 / pp.1137-1144 / 2022
  • In this study, an artificial intelligence (AI) was developed to assist facial expression practice for expressing emotions. The developed AI used multimodal inputs, consisting of sentences and facial images, for deep neural networks (DNNs). The DNNs calculated the similarity between the emotion predicted from a sentence and the emotion predicted from a facial image. The user practiced facial expressions for the situation given by a sentence, and the AI provided numerical feedback based on that similarity. A ResNet34 model was trained on the public FER2013 dataset to predict emotions from facial images. To predict emotions from sentences, a KoBERT model was trained by transfer learning on the conversational speech dataset for emotion classification released publicly by AIHub. The DNN predicting emotions from facial images achieved 65% accuracy, which is comparable to human emotion classification ability, and the DNN predicting emotions from sentences achieved 90% accuracy. The performance of the developed AI was evaluated through experiments in which an ordinary participant changed facial expressions.
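The feedback step described here, scoring how closely the facially expressed emotion matches the emotion implied by the sentence, can be sketched as a similarity between two predicted emotion distributions. The abstract does not specify the similarity measure or scoring scale, so the cosine similarity, the 0-100 score, the emotion label set, and the placeholder logits below are all assumptions; the real system would feed in KoBERT (text) and ResNet34 (face) outputs.

```python
import numpy as np

# Assumed emotion label set (FER2013-style categories).
EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise", "neutral"]

def softmax(z):
    """Convert raw model logits into an emotion probability distribution."""
    e = np.exp(z - z.max())
    return e / e.sum()

def similarity_feedback(text_logits, face_logits):
    """Cosine similarity between the text-predicted and face-predicted
    emotion distributions, scaled to a 0-100 score (hypothetical scheme)."""
    p = softmax(np.asarray(text_logits, dtype=float))
    q = softmax(np.asarray(face_logits, dtype=float))
    cos = float(p @ q / (np.linalg.norm(p) * np.linalg.norm(q)))
    return round(100 * cos, 1)

# Placeholder logits standing in for the two DNNs' outputs (assumed values).
text_logits = [0.1, 0.0, 0.2, 3.0, 0.1, 0.3, 0.5]  # sentence implies "happiness"
good_face = [0.2, 0.1, 0.1, 2.8, 0.0, 0.4, 0.6]    # matching happy expression
bad_face = [2.9, 0.2, 0.3, 0.1, 0.5, 0.2, 0.4]     # mismatched angry expression

# A matching expression should receive the higher practice score.
print(similarity_feedback(text_logits, good_face))
print(similarity_feedback(text_logits, bad_face))
```

Comparing full distributions rather than only the argmax labels gives the user graded feedback: a nearly-correct expression scores higher than a completely wrong one instead of both being marked as a miss.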