• Title/Summary/Keyword: Model extraction

Search results: 2,039

Development of a Web-based Presentation Attitude Correction Program Centered on Analyzing Facial Features of Videos through Coordinate Calculation (좌표계산을 통해 동영상의 안면 특징점 분석을 중심으로 한 웹 기반 발표 태도 교정 프로그램 개발)

  • Kwon, Kihyeon;An, Suho;Park, Chan Jung
    • The Journal of the Korea Contents Association
    • /
    • v.22 no.2
    • /
    • pp.10-21
    • /
    • 2022
  • Apart from observation by colleagues or professors, there are few automated methods for improving formal presentation attitudes, such as those required for job interviews or for presenting project results at a company. Previous studies have reported that a speaker's stable speech and gaze handling affect delivery power in a presentation, and other studies show that proper feedback on one's own presentation improves the presenter's ability to present. In this paper, considering these positive aspects of correction, we developed a program that intelligently corrects the poor presentation habits and attitudes of college students through facial analysis of videos, and we analyzed the proposed program's performance. The program was developed as a web-based system that checks for the use of redundant words, performs facial recognition, and converts the presentation contents into text. To this end, an artificial intelligence model for classification was developed; after extracting the object from the video, facial feature points were recognized based on their coordinates. Then, using 4,000 facial data samples, the performance of the algorithm in this paper was compared with facial recognition using a Teachable Machine. The program can be used to help presenters correct their presentation attitude.
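
The abstract above does not specify how the coordinate-based facial feature points were obtained. As a rough, non-authoritative sketch of that step only, the following Python code extracts per-frame landmark coordinates from a presentation video using MediaPipe FaceMesh; the choice of MediaPipe and the function name are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of per-frame facial feature-point extraction from a video;
# not the paper's code. Assumes OpenCV and MediaPipe are installed.
import cv2
import mediapipe as mp

def extract_landmarks(video_path: str):
    """Yield (frame_index, [(x, y), ...]) with normalized landmark coordinates."""
    face_mesh = mp.solutions.face_mesh.FaceMesh(static_image_mode=False,
                                                max_num_faces=1)
    cap = cv2.VideoCapture(video_path)
    idx = 0
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input; OpenCV reads frames as BGR
        result = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if result.multi_face_landmarks:
            pts = [(lm.x, lm.y) for lm in result.multi_face_landmarks[0].landmark]
            yield idx, pts
        idx += 1
    cap.release()
```

A downstream classifier (for gaze or posture habits) would consume these coordinate sequences; that part is not sketched here.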

A Study on Follow-up Survey Methodology to Verify the Effectiveness of the 인생나눔교실 (Life Sharing Class) Project (<인생나눔교실> 사업의 효과 검증을 위한 추적 조사 방법론 연구 - 2017~2018년도 영상추적조사를 중심으로 -)

  • Lee, Dong Eun
    • Korean Association of Arts Management
    • /
    • no.53
    • /
    • pp.207-247
    • /
    • 2020
  • The 인생나눔교실 (Life Sharing Class) is a project in which members of the senior generation who have humanistic knowledge become mentors and, drawing on their varied life experiences, communicate the wisdom and direction of life to younger generations of mentees. The project has been expanding since 2015, following a pilot operation in 2014. In general, projects such as this are evaluated by establishing effectiveness indicators, in order to verify their effectiveness and to set project management and development strategies. However, most of these evaluations have been conducted quantitatively and qualitatively over the short duration of the project. For continuous projects such as this one, and especially in the field of culture and arts where long-term effectiveness verification is required, short-term evaluation therefore makes it difficult to predict and judge the actual, meaningful effects. In this regard, this study examined the qualitative change of key participants in the project through the 2017 and 2018 video tracking surveys. For this purpose, a qualitative research methodology based on interview video recording, field recording, and value coding was adopted as appropriate to the research subject. To analyze the results, the interview videos were first transcribed and keywords were extracted; value coding was then matched with human psychological values, and a theoretical method was used to identify changes and derive their meaning. Although the study was designed as a follow-up survey, a limitation remains in that it analyzed changes over the rather short period of two years. Nevertheless, the study systematized the specific methodology that researchers should follow in a follow-up survey and set out a flow of research at a time when there is hardly any model for follow-up surveys in the field of culture and arts education, in Korea or abroad; its significance can be derived from this point. In addition, it is of considerable significance in preparing a detailed system and a case of comparative analysis methodology through value coding.

Technology Trends of Smart Abnormal Detection and Diagnosis System for Gas and Hydrogen Facilities (가스·수소 시설의 스마트 이상감지 및 진단 시스템 기술동향)

  • Park, Myeongnam;Kim, Byungkwon;Hong, Gi Hoon;Shin, Dongil
    • Journal of the Korean Institute of Gas
    • /
    • v.26 no.4
    • /
    • pp.41-57
    • /
    • 2022
  • The global demand for carbon neutrality in response to climate change means that countries such as Korea, which has an export-led economic structure and is a major greenhouse gas emitter, must prepare countermeasures against carbon trade barriers. Digital transformation, one of the foreseeable paths toward the carbon-neutral transition, should therefore be introduced early. This paper reviews trends in abnormality detection and diagnosis services built on cloud-based predictive-diagnosis monitoring technology that incorporates operating knowledge, as applied to industrial gas manufacturing facilities used in the high-tech manufacturing industry and to hydrogen gas facilities, which are emerging as an eco-friendly energy source. Rather than simply monitoring real-time facility status, small and medium-sized companies in the blind spot of carbon-neutral implementation can, by confirming the direction of abnormality-diagnosis predictive monitoring through optimization, augmented-reality technology, IoT, and AI knowledge inference, adopt technologies such as consensus knowledge in the engineering domain and predictive diagnostic monitoring that match the economic feasibility and efficiency of the technology. It is hoped that this review will be used as a way to seek countermeasures against carbon emission trade barriers based on the highest level of ICT technology.

Developing a New Algorithm for Conversational Agent to Detect Recognition Error and Neologism Meaning: Utilizing Korean Syllable-based Word Similarity (대화형 에이전트 인식오류 및 신조어 탐지를 위한 알고리즘 개발: 한글 음절 분리 기반의 단어 유사도 활용)

  • Jung-Won Lee;Il Im
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.3
    • /
    • pp.267-286
    • /
    • 2023
  • Conversational agents such as AI speakers use voice conversation for human-computer interaction, and voice recognition errors often occur in conversational situations. Recognition errors in user utterance records can be categorized into two types. The first type is misrecognition errors, where the agent fails to recognize the user's speech at all. The second type is misinterpretation errors, where the user's speech is recognized and a service is provided, but the interpretation differs from the user's intention. Among these, misinterpretation errors require separate error detection, since they are recorded as successful service interactions. In this study, various text separation methods were applied to detect misinterpretation errors. For each text separation method, the similarity of consecutive utterance pairs was calculated using word embedding and document embedding techniques, which convert words and documents into vectors. This approach goes beyond simple word-based similarity calculation to explore a new method for detecting misinterpretation errors. The research used real user utterance records to train and develop a detection model based on patterns in the causes of misinterpretation errors. The results revealed that the most significant analysis result was obtained through initial consonant extraction for detecting misinterpretation errors caused by the use of unregistered neologisms, and comparison with the other separation methods showed that different error types could be observed. This study has two main implications. First, for misinterpretation errors that are difficult to detect because they are not flagged as recognition failures, the study proposed diverse text separation methods and found a novel method that improved performance remarkably. Second, if this is applied to conversational agents or voice recognition services that require neologism detection, the patterns of errors arising from the voice recognition stage can be specified. The study proposed and verified that, even for interactions not categorized as errors, services can be aligned with the results users actually intended.
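
As a self-contained illustration of the syllable-separation idea the abstract describes (not the authors' code), the sketch below decomposes Hangul syllables, keeps only the initial consonants (choseong), and compares two consecutive utterances with a simple Jaccard similarity; the similarity measure and the example utterances are assumptions for illustration.

```python
# Illustrative sketch: choseong (initial consonant) extraction plus a simple
# character-set similarity between consecutive utterances. Near-duplicate pairs
# may indicate that the user repeated a request after a misinterpretation error.
CHOSEONG = list("ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ")

def initial_consonants(text: str) -> str:
    """Map each precomposed Hangul syllable to its initial consonant."""
    out = []
    for ch in text:
        code = ord(ch)
        if 0xAC00 <= code <= 0xD7A3:                      # Hangul syllable block
            out.append(CHOSEONG[(code - 0xAC00) // 588])  # 588 = 21 vowels * 28 finals
        else:
            out.append(ch)
    return "".join(out)

def jaccard(a: str, b: str) -> float:
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb) if (sa | sb) else 0.0

u1, u2 = "갓성비 노래 틀어줘", "가성비 노래 틀어줘"   # hypothetical utterance pair
print(jaccard(initial_consonants(u1), initial_consonants(u2)))
```

In the paper itself the comparison uses word and document embeddings over the separated text rather than a set overlap; the embedding step is omitted here for brevity.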

A Study of Optimal Lotion Manufacturing Process Containing Angelica gigas Nakai Extracts by Utilizing Experimental Design and Design Space Convergence Analysis (실험 설계와 디자인 스페이스 융합 분석을 통한 Angelica gigas Nakai 추출물을 함유한 로션 제조의 최적 공정 연구)

  • Pyo, Jae-Sung;Kim, Hyun-Jin;Yoon, Seon-hye;Park, Jae-Kyu;Kim, Kang-Min
    • Journal of Convergence for Information Technology
    • /
    • v.12 no.3
    • /
    • pp.132-140
    • /
    • 2022
  • This study was conducted to identify the optimal manufacturing conditions for a lotion containing decursin and decursinol angelate from Angelica gigas Nakai extract. The lotion was confirmed to maintain its viscosity (5,208±112 cPs), assay (99.71±1.01%), and pH (5.62) for 3 months. The manufacturing conditions of mixing step 4 of the lotion formulation were optimized using a 2²+3 full factorial design (a two-level, two-factor design with three center points). Mixing temperature (40-80℃) and mixing time (10-30 min) were used as independent variables, with three responses (assay, pH, and weight variation) as critical quality attributes (CQAs). The models for assay and weight variation showed a proper fit, with a coefficient of determination of about 0.9 for the regression equation and a p-value less than 0.05. The estimated conditions for the optimal lotion manufacturing process were a mixing temperature of 61.93℃ and a mixing time of 15.85 min. The predicted values at a mixing temperature of 60℃ and a mixing time of 20 min were 100.69% for assay, 5.57 for pH, and 98.07% for weight variation. In verification by actual measurement, the obtained values were 100.29±0.98% for assay, 5.57±0.02 for pH, and 98.27±0.89% for weight variation, respectively, in good agreement with the predicted values.
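
The design described above can be written down concretely. As a minimal sketch (with synthetic response values, not the paper's data), the code below builds the 2²+3 design in mixing temperature and time and fits a first-order model with interaction by least squares, reporting the coefficient of determination.

```python
# Minimal sketch of a 2^2 full factorial design with 3 center points and an
# ordinary-least-squares fit. The assay values are synthetic placeholders.
import numpy as np

temp = np.array([40, 40, 80, 80, 60, 60, 60], dtype=float)       # degC
time = np.array([10, 30, 10, 30, 20, 20, 20], dtype=float)       # min
assay = np.array([98.2, 99.0, 99.5, 98.8, 100.4, 100.6, 100.5])  # synthetic %

# Design matrix: intercept, main effects, two-factor interaction
X = np.column_stack([np.ones_like(temp), temp, time, temp * time])
beta, *_ = np.linalg.lstsq(X, assay, rcond=None)

pred = X @ beta
r2 = 1 - np.sum((assay - pred) ** 2) / np.sum((assay - assay.mean()) ** 2)
print("coefficients:", beta.round(4))
print("R^2:", round(r2, 3))
```

Optimal settings would then be read off the fitted response surface within the design space, which is the step the abstract refers to as design space analysis.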

Short-Term Precipitation Forecasting based on Deep Neural Network with Synthetic Weather Radar Data (기상레이더 강수 합성데이터를 활용한 심층신경망 기반 초단기 강수예측 기술 연구)

  • An, Sojung;Choi, Youn;Son, MyoungJae;Kim, Kwang-Ho;Jung, Sung-Hwa;Park, Young-Youn
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2021.05a
    • /
    • pp.43-45
    • /
    • 2021
  • The short-term quantitative precipitation forecasting (QPF) system is socially and economically important for preventing damage from severe weather. Recently, many studies on short-term QPF models applying deep neural networks (DNNs) have been conducted. These studies require sophisticated pre-processing, because mishandling of the varied and vast meteorological data sets degrades QPF performance. In particular, for more accurate prediction of the non-linear behavior of precipitation, the data set needs to be handled carefully, based on a physical and dynamical understanding of the data. This paper therefore proposes the following approaches: i) refining and combining the major factors related to precipitation development (weather radar, terrain, air temperature, and so on) in order to construct training data for pattern analysis of precipitation; and ii) producing predicted precipitation fields with a convolutional network based on ConvLSTM. The proposed algorithm was evaluated on rainfall events in 2020; it performed well in reproducing the magnitude and strength of precipitation and clearly predicted non-linear precipitation patterns. The algorithm can be useful as a forecasting tool for preventing damage from severe weather.
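
The abstract names ConvLSTM as the core of the prediction model but gives no architecture details. The sketch below is a generic ConvLSTM nowcasting model in TensorFlow/Keras; the layer sizes, sequence length, grid size, and channel count are assumptions for illustration, not the authors' configuration.

```python
# Generic ConvLSTM nowcasting sketch: a sequence of multi-channel input fields
# (e.g., radar, terrain, air temperature) -> one predicted precipitation field.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_convlstm_nowcaster(seq_len=6, height=128, width=128, channels=4):
    model = models.Sequential([
        layers.Input(shape=(seq_len, height, width, channels)),
        layers.ConvLSTM2D(32, (3, 3), padding="same", return_sequences=True),
        layers.BatchNormalization(),
        layers.ConvLSTM2D(32, (3, 3), padding="same", return_sequences=False),
        layers.BatchNormalization(),
        layers.Conv2D(1, (1, 1), padding="same", activation="relu"),  # rain rate >= 0
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

model = build_convlstm_nowcaster()
model.summary()
```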

Extraction of Snowmelt Parameters using NOAA AVHRR and GIS Technique for 7 Major Dam Watersheds in South Korea (NOAA AVHRR 영상 및 GIS 기법을 이용한 국내 주요 7개 댐 유역의 융설 매개변수 추출)

  • Shin, Hyung Jin;Kim, Seong Joon
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.28 no.2B
    • /
    • pp.177-185
    • /
    • 2008
  • Accurate monitoring of snow cover is a key component of studying climate and global change, as well as of daily weather forecasting and snowmelt runoff modelling. The scarcity of observed data related to snowmelt has been the major difficulty in extracting snowmelt factors such as snow cover area, snow depth, and the depletion curve. Remote sensing technology is very effective for observing a wide area, but although many researchers have used remote sensing for snow observation, there have been few discussions of the characteristics of spatial and temporal variation. Snow cover maps were derived from NOAA AVHRR images for the winter seasons from 1997 to 2006. Distributed snow depth was mapped by overlaying the snow cover maps with snowfall maps interpolated from 69 meteorological observation stations. Model parameters (snow cover area, SCA; snow depth; snow cover depletion curve, SDC) were built for the 7 major dam watersheds in South Korea. The decrease of SCA over time (days) was expressed as an exponential decay function, and the coefficient of determination ranged from 0.46 to 0.88. The SCA decreased by 70% to 100% from its maximum after 10 days.
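
The exponential depletion relationship mentioned above can be illustrated concretely. The sketch below fits SCA(t) = a·exp(-kt) to a depletion series with SciPy and reports the coefficient of determination; the SCA values are synthetic placeholders, not the paper's measurements.

```python
# Illustrative exponential-decay fit of a snow cover area (SCA) depletion series.
import numpy as np
from scipy.optimize import curve_fit

def sca_decay(t, a, k):
    """SCA(t) = a * exp(-k * t), with t in days since maximum snow cover."""
    return a * np.exp(-k * t)

days = np.array([0, 2, 4, 6, 8, 10], dtype=float)
sca = np.array([95.0, 62.0, 40.0, 27.0, 17.0, 11.0])   # synthetic % of watershed

params, _ = curve_fit(sca_decay, days, sca, p0=(100.0, 0.2))
pred = sca_decay(days, *params)
r2 = 1 - np.sum((sca - pred) ** 2) / np.sum((sca - sca.mean()) ** 2)
print("a = %.1f, k = %.3f per day, R^2 = %.2f" % (*params, r2))
```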

Quantitative Evaluation of Super-resolution Drone Images Generated Using Deep Learning (딥러닝을 이용하여 생성한 초해상화 드론 영상의 정량적 평가)

  • Seo, Hong-Deok;So, Hyeong-Yoon;Kim, Eui-Myoung
    • Journal of Cadastre & Land InformatiX
    • /
    • v.53 no.2
    • /
    • pp.5-18
    • /
    • 2023
  • As the development of drones and sensors accelerates, new services and values are created by fusing data acquired from the various sensors mounted on drones. However, the construction of spatial information through data fusion depends mainly on the imagery, and data quality is determined by the specification and performance of the hardware. In addition, high-quality spatial information is difficult to use in the field because it requires expensive equipment. In this study, super-resolution was performed by applying deep learning to low-resolution images acquired by the RGB and THM cameras mounted on a drone, and quantitative evaluation and feature point extraction were performed on the generated high-resolution images. The experimental results showed that the high-resolution images generated by super-resolution maintained the characteristics of the original images and that, as the resolution improved, more feature points could be extracted than from the original images. Therefore, generating high-resolution images by applying a super-resolution deep learning model to low-resolution images is judged to be a new way to construct high-quality spatial information without being restricted by hardware.
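
The feature-point comparison mentioned above can be mimicked with a few lines of OpenCV. In the sketch below, ORB is used as a stand-in keypoint detector and bicubic upscaling as a stand-in for a learned super-resolution model; both choices, and the input file name, are assumptions for illustration only.

```python
# Rough sketch: count detectable keypoints in an original frame vs. an upscaled
# version. A real experiment would replace cv2.resize with an SR network output.
import cv2

def count_keypoints(image, detector):
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    return len(detector.detect(gray, None))

orb = cv2.ORB_create(nfeatures=10000)
low_res = cv2.imread("drone_frame.jpg")                      # hypothetical input
up2x = cv2.resize(low_res, None, fx=2, fy=2, interpolation=cv2.INTER_CUBIC)

print("original :", count_keypoints(low_res, orb))
print("upscaled :", count_keypoints(up2x, orb))
```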

Prediction of Patient Management in COVID-19 Using Deep Learning-Based Fully Automated Extraction of Cardiothoracic CT Metrics and Laboratory Findings

  • Thomas Weikert;Saikiran Rapaka;Sasa Grbic;Thomas Re;Shikha Chaganti;David J. Winkel;Constantin Anastasopoulos;Tilo Niemann;Benedikt J. Wiggli;Jens Bremerich;Raphael Twerenbold;Gregor Sommer;Dorin Comaniciu;Alexander W. Sauter
    • Korean Journal of Radiology
    • /
    • v.22 no.6
    • /
    • pp.994-1004
    • /
    • 2021
  • Objective: To extract pulmonary and cardiovascular metrics from chest CTs of patients with coronavirus disease 2019 (COVID-19) using a fully automated deep learning-based approach and assess their potential to predict patient management. Materials and Methods: All initial chest CTs of patients who tested positive for severe acute respiratory syndrome coronavirus 2 at our emergency department between March 25 and April 25, 2020, were identified (n = 120). Three patient management groups were defined: group 1 (outpatient), group 2 (general ward), and group 3 (intensive care unit [ICU]). Multiple pulmonary and cardiovascular metrics were extracted from the chest CT images using deep learning. Additionally, six laboratory findings indicating inflammation and cellular damage were considered. Differences in CT metrics, laboratory findings, and demographics between the patient management groups were assessed. The potential of these parameters to predict patients' needs for intensive care (yes/no) was analyzed using logistic regression and receiver operating characteristic curves. Internal and external validity were assessed using 109 independent chest CT scans. Results: While demographic parameters alone (sex and age) were not sufficient to predict ICU management status, both CT metrics alone (including both pulmonary and cardiovascular metrics; area under the curve [AUC] = 0.88; 95% confidence interval [CI] = 0.79-0.97) and laboratory findings alone (C-reactive protein, lactate dehydrogenase, white blood cell count, and albumin; AUC = 0.86; 95% CI = 0.77-0.94) were good classifiers. Excellent performance was achieved by a combination of demographic parameters, CT metrics, and laboratory findings (AUC = 0.91; 95% CI = 0.85-0.98). Application of a model that combined both pulmonary CT metrics and demographic parameters on a dataset from another hospital indicated its external validity (AUC = 0.77; 95% CI = 0.66-0.88). Conclusion: Chest CT of patients with COVID-19 contains valuable information that can be accessed using automated image analysis. These metrics are useful for the prediction of patient management.
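
As a generic illustration of the classification step the abstract describes (not the authors' code), the sketch below fits a logistic regression on a table of CT metrics and laboratory findings and reports the ROC AUC; the CSV file and column names are hypothetical placeholders.

```python
# Generic sketch: predict ICU need (yes/no) from CT metrics and lab findings
# with logistic regression and ROC AUC. Data file and columns are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

df = pd.read_csv("covid_ct_lab.csv")                 # hypothetical dataset
features = ["age", "sex", "lung_opacity_pct", "crp", "ldh", "wbc", "albumin"]
X, y = df[features], df["icu"]                       # sex assumed numerically encoded;
                                                     # icu: 1 = ICU, 0 = otherwise

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0,
                                           stratify=y)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("AUC = %.2f" % roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```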

Cavitation signal detection based on time-series signal statistics (시계열 신호 통계량 기반 캐비테이션 신호 탐지)

  • Haesang Yang;Ha-Min Choi;Sock-Kyu Lee;Woojae Seong
    • The Journal of the Acoustical Society of Korea
    • /
    • v.43 no.4
    • /
    • pp.400-405
    • /
    • 2024
  • When cavitation noise occurs at a ship's propellers, the level of underwater radiated noise increases abruptly, which can be a critical threat factor, particularly for naval vessels, because it increases the probability of detection. Accurately and promptly assessing cavitation signals is therefore crucial for improving the survivability of submarines. Traditionally, techniques for determining cavitation occurrence have mainly relied on checking whether acoustic/vibration levels measured by sensors exceed a certain threshold, or on the Detection of Envelope Modulation on Noise (DEMON) method. However, such technologies depend on a physical understanding of cavitation phenomena and on subjective criteria based on user experience, and they involve multiple procedures, which motivates the development of techniques for early automatic recognition of cavitation signals. In this paper, we propose an algorithm that automatically detects cavitation occurrence based on simple statistical features, reflecting cavitation characteristics, extracted from acoustic signals measured by sensors attached to the hull. The performance of the proposed technique is evaluated with respect to the number of sensors and the model test conditions. It was confirmed that, once the cavitation characteristics reflected in the signals measured by a single sensor are sufficiently learned, the occurrence of cavitation can be determined.
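
The "simple statistical features" in the abstract are not enumerated. As a hedged illustration only, the sketch below computes a common set of frame-level statistics (RMS, kurtosis, skewness, crest factor) over a synthetic hull-sensor signal; the feature set, frame length, and sampling rate are assumptions, and the classifier that would be trained on such features is omitted.

```python
# Illustrative frame-level statistics for a time-series signal; the signal here
# is synthetic noise with one injected burst, standing in for hull-sensor data.
import numpy as np
from scipy.stats import kurtosis, skew

def frame_features(x: np.ndarray) -> dict:
    rms = np.sqrt(np.mean(x ** 2))
    return {
        "rms": rms,
        "kurtosis": kurtosis(x),                 # impulsiveness of the frame
        "skewness": skew(x),
        "crest_factor": np.max(np.abs(x)) / rms,
    }

fs = 51_200                                       # assumed sampling rate (Hz)
rng = np.random.default_rng(0)
signal = 0.1 * rng.standard_normal(fs)            # one second of background noise
signal[fs // 2 : fs // 2 + 200] += 2.0 * rng.standard_normal(200)  # burst event

for i, frame in enumerate(np.array_split(signal, 10)):
    print(i, frame_features(frame))
```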