• Title/Summary/Keyword: Deep learning based control

Real-time simulation and control of indoor air exchange volume based on Digital Twin Platform

  • Chia-Ying Lin;I-Chen Wu
    • International conference on construction engineering and project management / 2024.07a / pp.637-644 / 2024
  • Building Information Modeling (BIM) technology has been widely adopted in the construction industry. However, a challenge encountered in the operational phase is that building object data cannot be updated in real time. The concept of the Digital Twin is to digitally simulate objects, environments, and processes in the real world, employing real-time monitoring, simulation, and prediction to achieve dynamic integration between the virtual and the real. This research takes indoor air quality as an example for realizing the Digital Twin concept and for solving the problem that a digital twin platform cannot be updated in real time. In indoor air quality monitoring, the ventilation rate and the presence of occupants significantly affect carbon dioxide concentration. This study uses the indoor carbon dioxide concentration recommended by the Taiwan Environmental Protection Agency as the reference standard for air quality, providing a solution to the aforementioned challenges. The research develops a digital twin platform using Unity, which seamlessly integrates BIM and IoT technology to realize and synchronize the virtual and real environments. Deep learning techniques are applied to process camera images and real-time monitoring data from IoT sensors: the camera images are used to detect people entering and leaving the indoor space, while the monitoring data capture environmental conditions. These data serve as the basis for calculating carbon dioxide concentration and determining the optimal indoor air exchange volume (a simplified sketch of this kind of calculation follows below). The platform not only simulates the air quality of the environment but also aids space managers in decision-making to optimize indoor environments; it enables real-time monitoring and contributes to energy conservation.
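
The paper does not publish its calculation code; the following is a minimal sketch of a standard single-zone CO2 mass-balance model of the kind such a platform could use, assuming a well-mixed space and a constant per-person CO2 generation rate. All variable names and default values are illustrative assumptions, not taken from the study.

```python
# Minimal single-zone CO2 mass-balance sketch (illustrative assumptions only).
# dC/dt = (Q/V) * (C_out - C) + N * g * 1e6 / V
#   C     : indoor CO2 concentration [ppm]
#   C_out : outdoor CO2 concentration [ppm]
#   Q     : air exchange volume (ventilation rate) [m^3/h]
#   V     : room volume [m^3]
#   N     : occupant count (e.g. from camera-based person detection)
#   g     : CO2 generation rate per person [m^3/h]

def simulate_co2(c0_ppm, occupants, q_m3h, v_m3=150.0, c_out_ppm=420.0,
                 gen_m3h_per_person=0.02, dt_h=1/60, steps=60):
    """Forward-Euler simulation of indoor CO2 over `steps` time steps."""
    c = c0_ppm
    trace = []
    for _ in range(steps):
        dc = (q_m3h / v_m3) * (c_out_ppm - c) + occupants * gen_m3h_per_person * 1e6 / v_m3
        c += dc * dt_h
        trace.append(c)
    return trace

def required_air_exchange(occupants, target_ppm=1000.0, c_out_ppm=420.0,
                          gen_m3h_per_person=0.02):
    """Steady-state ventilation rate [m^3/h] needed to hold CO2 at `target_ppm`."""
    return occupants * gen_m3h_per_person * 1e6 / (target_ppm - c_out_ppm)

if __name__ == "__main__":
    print(simulate_co2(c0_ppm=600, occupants=4, q_m3h=120)[-1])  # CO2 after 1 hour
    print(required_air_exchange(occupants=4))                    # m^3/h for 1000 ppm cap
```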

Examination of Aggregate Quality Using Image Processing Based on Deep-Learning (딥러닝 기반 영상처리를 이용한 골재 품질 검사)

  • Kim, Seong Kyu;Choi, Woo Bin;Lee, Jong Se;Lee, Won Gok;Choi, Gun Oh;Bae, You Suk
    • KIPS Transactions on Software and Data Engineering / v.11 no.6 / pp.255-266 / 2022
  • The quality control of coarse aggregate, one of the main ingredients of concrete, is currently carried out by sampling-based SPC (Statistical Process Control). To build a smart factory for manufacturing innovation, we change the quality control of coarse aggregate from the current sieve analysis to an image-based inspection using images acquired through a camera. First, the acquired images are preprocessed, and HED (Holistically-Nested Edge Detection), a filter learned by deep learning, segments each object. Each aggregate particle in the segmentation result is then analyzed by image processing, and the fineness modulus and the aggregate shape rate are determined from the analysis result (a simplified sketch of the fineness-modulus calculation follows below). The quality of the aggregate captured in the video was examined by calculating the fineness modulus and the aggregate shape rate, and the algorithm agreed with the sieve analysis with more than 90% accuracy. Furthermore, while the aggregate shape rate could not be examined by conventional methods, the approach in this paper also allows it to be measured; it was verified against model lengths, showing a difference of ±4.5%. When measuring the length of aggregate particles, the algorithm result and the actual length differed by ±6%. Analyzing three-dimensional objects in a two-dimensional video introduces a difference from the actual data, which requires further research.
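
The paper's measurement pipeline is not reproduced here; below is a minimal sketch of how a fineness modulus could be computed from per-particle sizes estimated by segmentation, assuming the standard definition (sum of cumulative percentages retained on standard sieves, divided by 100). The sieve series, weighting choice, and function names are illustrative assumptions.

```python
import numpy as np

# Standard sieve openings in mm (illustrative coarse-aggregate series).
SIEVES_MM = [75.0, 37.5, 19.0, 9.5, 4.75, 2.36, 1.18, 0.6, 0.3, 0.15]

def fineness_modulus(particle_sizes_mm, particle_masses=None):
    """Fineness modulus = sum of cumulative % retained on standard sieves / 100.

    particle_sizes_mm : estimated size of each segmented particle
    particle_masses   : optional per-particle mass proxy (e.g. area or volume);
                        defaults to equal weighting.
    """
    sizes = np.asarray(particle_sizes_mm, dtype=float)
    masses = np.ones_like(sizes) if particle_masses is None else np.asarray(particle_masses, float)
    total = masses.sum()
    cumulative_retained = []
    for opening in SIEVES_MM:
        # Cumulative % retained = share of material coarser than this opening.
        cumulative_retained.append(masses[sizes > opening].sum() / total * 100.0)
    return sum(cumulative_retained) / 100.0

if __name__ == "__main__":
    # Sizes (mm) that might come from measuring segmented particles in an image.
    demo_sizes = np.random.uniform(2.0, 40.0, size=500)
    print(round(fineness_modulus(demo_sizes), 2))
```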

Contextual-Bandit Based Log Level Setting for Video Wall Controller (Contextual Bandit에 기반한 비디오 월 컨트롤러의 로그레벨)

  • Kim, Sung-jin
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.05a / pp.633-635 / 2022
  • If an error occurs during operation of the video wall controller, the control system creates a log file and records logs. To minimize the load that logging places on the system, the log level is set so that as little as possible is recorded under normal operating conditions. When an error occurs, the log level is changed so that detailed logs are recorded for analyzing and responding to the cause of the error. This reduces work efficiency, and operator intervention is unavoidable whenever the log level must be changed. In this paper, we propose a model that automatically sets the log level according to the operating situation using a Contextual Bandit (a minimal sketch of such a bandit follows below).
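
The abstract does not specify the bandit formulation; the following is a minimal epsilon-greedy contextual-bandit sketch in which the arms are log levels and the context is a coarse operating state. The context encoding, reward definition, and all names are illustrative assumptions, not the paper's design.

```python
import random
from collections import defaultdict

LOG_LEVELS = ["ERROR", "WARN", "INFO", "DEBUG"]    # arms: candidate log levels
CONTEXTS = ["normal", "degraded", "error_burst"]   # assumed coarse operating states

class EpsilonGreedyLogLevelBandit:
    """Per-context epsilon-greedy bandit that picks a log level."""
    def __init__(self, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = defaultdict(lambda: [0] * len(LOG_LEVELS))
        self.values = defaultdict(lambda: [0.0] * len(LOG_LEVELS))

    def select(self, context):
        if random.random() < self.epsilon:
            return random.randrange(len(LOG_LEVELS))              # explore
        vals = self.values[context]
        return max(range(len(LOG_LEVELS)), key=vals.__getitem__)  # exploit

    def update(self, context, arm, reward):
        self.counts[context][arm] += 1
        n = self.counts[context][arm]
        self.values[context][arm] += (reward - self.values[context][arm]) / n

def reward(context, arm):
    """Assumed reward: prefer terse logging when normal, verbose when errors occur."""
    verbosity = arm / (len(LOG_LEVELS) - 1)
    return 1.0 - verbosity if context == "normal" else verbosity

if __name__ == "__main__":
    bandit = EpsilonGreedyLogLevelBandit()
    for _ in range(2000):
        ctx = random.choice(CONTEXTS)
        arm = bandit.select(ctx)
        bandit.update(ctx, arm, reward(ctx, arm))
    for ctx in CONTEXTS:
        print(ctx, "->", LOG_LEVELS[bandit.select(ctx)])
```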

A Trend Analysis of ECG Classification based on Deep Learning (딥러닝기반 심전도 분류의 국내외 동향분석)

  • Byeon, Yeong-Hyeon;Kwak, Keun-Chang
    • Annual Conference of KIPS / 2019.05a / pp.246-249 / 2019
  • The electrocardiogram (ECG) measures, at the skin outside the body, the electrical potential difference of the heart, which varies minutely with cardiac activity; it has recently been studied extensively as an alternative to existing biosignal systems in services such as healthcare, finance, security, and entertainment. Existing services such as personal identification, personal authentication, arrhythmia recognition, activity recognition, and atrial fibrillation detection are fundamentally ECG classification technologies, and because deep learning has recently shown outstanding performance in many fields, ECG analysis using deep learning is also being actively studied. This paper therefore analyzes domestic and international trends in deep learning-based ECG classification.

Development of Automative Loudness Control Technique based on Audio Contents Analysis using Deep Learning (딥러닝을 이용한 오디오 콘텐츠 분석 기반의 자동 음량 제어 기술 개발)

  • Lee, Young Han;Cho, Choongsang;Kim, Je Woo
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2018.11a / pp.42-43 / 2018
  • Since the revision of Korea's Broadcasting Act in 2016, domestic digital broadcast programs have been delivered with loudness matched across channels and programs using the measurement method proposed by the ITU-R/EBU. In general, except for fields such as news or live relays where loudness must be matched in real time, programs are transmitted with their average loudness adjusted to the regulation. This paper proposes a technique for improving the intelligibility of low-loudness passages, which suffers when the average loudness is matched uniformly. Specifically, as one technique for controlling broadcast loudness, we propose a method that analyzes the audio content and varies the degree of loudness adjustment per segment, so that speech in low-loudness passages is given relatively higher loudness while background music and similar content is given relatively lower loudness, thereby improving intelligibility (a minimal sketch of such segment-wise gain control follows below). To verify the performance of the proposed method, we measured the accuracy of the audio content analysis and analyzed the audio waveforms, confirming that, compared with the existing loudness control technique, the loudness of speech segments is amplified.
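
The paper's control algorithm is not given in the abstract; the following is a minimal sketch of segment-wise gain control driven by a content label per segment, assuming the labels come from an upstream classifier. The per-class gain values and all names are illustrative assumptions.

```python
import numpy as np

# Assumed per-class gain offsets in dB, relative to the uniform loudness target.
GAIN_DB = {"speech": +3.0, "mixed": 0.0, "music": -3.0, "silence": 0.0}

def apply_segmentwise_gain(samples, sample_rate, segments):
    """Apply a class-dependent gain to each labeled audio segment.

    samples  : 1-D float array of audio samples in [-1, 1]
    segments : list of (start_sec, end_sec, label) from a content classifier
    """
    out = samples.copy()
    for start, end, label in segments:
        lo, hi = int(start * sample_rate), int(end * sample_rate)
        out[lo:hi] *= 10.0 ** (GAIN_DB.get(label, 0.0) / 20.0)
    return np.clip(out, -1.0, 1.0)

if __name__ == "__main__":
    sr = 16000
    audio = 0.1 * np.random.randn(10 * sr)           # placeholder audio
    segs = [(0.0, 4.0, "speech"), (4.0, 10.0, "music")]
    boosted = apply_segmentwise_gain(audio, sr, segs)
    print(boosted[:sr].std() / audio[:sr].std())      # ~1.41, i.e. +3 dB on speech
```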

Audio Contents Classification based on Deep learning for Automatic Loudness Control (오디오 음량 자동 제어를 위한 콘텐츠 분류 기술 개발)

  • Lee, Young Han;Cho, Choongsang;Kim, Je Woo
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2018.06a / pp.320-321 / 2018
  • In automatic control of audio loudness, content analysis is needed to prevent the loudness of segments containing speech from being reduced abruptly. This paper proposes a deep learning-based content classification technique as a component of broadcast loudness control. To this end, audio is defined as four classes, silence, speech, speech/audio mix, and audio, and a 2D CNN-based classifier operating on mel-spectrograms is defined to process them (a minimal sketch of such a classifier follows below). A training/validation dataset was also constructed from broadcast audio data. To verify the performance of the proposed method, accuracy was measured on the validation dataset, and an accuracy of about 81.1% was confirmed.
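
The network architecture is not detailed in the abstract; below is a minimal sketch of a mel-spectrogram front end feeding a small 2D CNN over the four classes, using librosa and PyTorch. Layer sizes, the number of mel bands, and other hyperparameters are illustrative assumptions.

```python
import librosa
import numpy as np
import torch
import torch.nn as nn

CLASSES = ["silence", "speech", "mix", "audio"]

def mel_input(wav_path, sr=16000, n_mels=64, duration=3.0):
    """Load a clip and return a (1, 1, n_mels, frames) log-mel tensor."""
    y, _ = librosa.load(wav_path, sr=sr, duration=duration)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    logmel = librosa.power_to_db(mel, ref=np.max)
    return torch.tensor(logmel, dtype=torch.float32)[None, None]

class AudioContentCNN(nn.Module):
    """Small 2D CNN classifier over log-mel spectrograms (illustrative)."""
    def __init__(self, n_classes=len(CLASSES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

if __name__ == "__main__":
    model = AudioContentCNN()
    dummy = torch.randn(1, 1, 64, 94)   # ~3 s of 16 kHz audio at the default hop size
    print(CLASSES[model(dummy).argmax(dim=1).item()])
```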

Performance Comparison of State-of-the-Art Vocoder Technology Based on Deep Learning in a Korean TTS System (한국어 TTS 시스템에서 딥러닝 기반 최첨단 보코더 기술 성능 비교)

  • Kwon, Chul Hong
    • The Journal of the Convergence on Culture Technology / v.6 no.2 / pp.509-514 / 2020
  • The conventional TTS system consists of several modules, including text preprocessing, parsing, grapheme-to-phoneme conversion, boundary analysis, prosody control, acoustic feature generation by an acoustic model, and synthesized speech generation. A deep learning TTS system, by contrast, is composed of a Text2Mel process that generates a spectrogram from text and a vocoder that synthesizes the speech signal from the spectrogram. In this paper, to construct an optimal Korean TTS system, we apply Tacotron2 to the Text2Mel process and, as the vocoder, introduce WaveNet, WaveRNN, and WaveGlow, implementing them to verify and compare their performance. Experimental results show that WaveNet has the highest MOS and its trained model is hundreds of megabytes in size, but its synthesis time is about 50 times real time. WaveRNN shows MOS performance similar to that of WaveNet and its model size is several tens of megabytes, but it also cannot run in real time. WaveGlow can handle real-time processing, but its model is several gigabytes in size and its MOS is the worst of the three vocoders. From these results, this paper presents reference criteria for selecting the appropriate vocoder according to the hardware environment when deploying a TTS system (a minimal sketch of the real-time-factor measurement used in such comparisons follows below).
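
The abstract compares vocoders by whether synthesis keeps up with real time; below is a minimal sketch of how a real-time factor (RTF) could be measured for any vocoder callable, assuming it maps a mel-spectrogram to a waveform. The callable, hop length, and parameters are illustrative assumptions, not the paper's code.

```python
import time
import numpy as np

def real_time_factor(vocoder, mel, sample_rate=22050):
    """RTF = synthesis time / audio duration; RTF < 1 means faster than real time."""
    start = time.perf_counter()
    waveform = vocoder(mel)                    # vocoder: mel-spectrogram -> 1-D waveform
    elapsed = time.perf_counter() - start
    duration = len(waveform) / sample_rate
    return elapsed / duration

if __name__ == "__main__":
    # Placeholder "vocoder" that just upsamples a mel-spectrogram with noise,
    # standing in for WaveNet / WaveRNN / WaveGlow inference.
    hop_length = 256

    def dummy_vocoder(mel):
        n_frames = mel.shape[1]
        return np.random.randn(n_frames * hop_length).astype(np.float32)

    mel = np.random.randn(80, 860).astype(np.float32)   # ~10 s at 22.05 kHz, hop 256
    print(f"RTF = {real_time_factor(dummy_vocoder, mel):.4f}")
```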

Fat Client-Based Abstraction Model of Unstructured Data for Context-Aware Service in Edge Computing Environment (에지 컴퓨팅 환경에서의 상황인지 서비스를 위한 팻 클라이언트 기반 비정형 데이터 추상화 방법)

  • Kim, Do Hyung;Mun, Jong Hyeok;Park, Yoo Sang;Choi, Jong Sun;Choi, Jae Young
    • KIPS Transactions on Computer and Communication Systems / v.10 no.3 / pp.59-70 / 2021
  • With recent advancements in the Internet of Things, context-aware systems that provide customized services have become important. Existing context-aware systems analyze data generated around the user and abstract it into context information that expresses the state of a situation. However, these data are mostly unstructured and are difficult to process with simple approaches, so providing context-aware services over them needs to be managed in a simplified way. One example of handling such unstructured data is a deep learning application. The processes in deep learning applications, from acquisition to analysis, are tightly coupled in how they abstract the dataset, which makes them less flexible in terms of functional scalability when the target analysis model or application is modified. Therefore, an abstraction model is proposed that separates these phases and processes the unstructured data for analysis. The proposed abstraction uses a description language named Analysis Model Description Language (AMDL) to deploy the analysis phases onto fat clients, where each fat client is an instance specifically designed for resource-oriented tasks in edge computing environments, and the AMDL and fat client profiles describe how different analysis applications and their factors are handled. The experiment demonstrates functional scalability through examples of AMDL and fat client profiles targeting a vehicle image recognition model for a vehicle access control notification service, and performs process-by-process monitoring of the collection, preprocessing, and analysis of unstructured data.

Injection Process Yield Improvement Methodology Based on eXplainable Artificial Intelligence (XAI) Algorithm (XAI(eXplainable Artificial Intelligence) 알고리즘 기반 사출 공정 수율 개선 방법론)

  • Ji-Soo Hong;Yong-Min Hong;Seung-Yong Oh;Tae-Ho Kang;Hyeon-Jeong Lee;Sung-Woo Kang
    • Journal of Korean Society for Quality Management / v.51 no.1 / pp.55-65 / 2023
  • Purpose: The purpose of this study is to propose an optimization process to improve product yield in an injection molding process using process data. Research on low-cost, high-efficiency production in manufacturing processes using machine learning or deep learning has continued in recent years. This study therefore derives the major variables that affect product defects in the manufacturing process using an eXplainable Artificial Intelligence (XAI) method, and then presents the optimal ranges of those variables to propose a methodology for improving product yield. Methods: This study uses the injection molding machine AI dataset released on the Korea AI Manufacturing Platform (KAMP) organized by KAIST. Using the XAI-based SHAP method, the major variables affecting product defects are extracted from the process data (a minimal sketch of this kind of SHAP-based variable ranking follows below). XGBoost and LightGBM are used as learning algorithms, and five to six variables are extracted as the main process variables for the injection process. Subsequently, the optimal control range of each process variable is presented using the ICE method. Finally, the product yield improvement methodology of this study is validated on test data. Results: In the injection process data, XGBoost achieved an improved defect rate of 0.21% and LightGBM an improved defect rate of 0.29%, reductions of 0.79%p and 0.71%p, respectively, from the existing defect rate of 1.00%. Conclusion: This study is a case study. A research methodology was proposed for the injection process, and the improvement in product yield was confirmed through verification.
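
The study's exact feature set and model settings are not given in the abstract; the following is a minimal sketch of ranking influential process variables with an XGBoost defect classifier and SHAP's TreeExplainer. The column names, defect label, and hyperparameters are illustrative assumptions, not the KAMP dataset schema.

```python
import numpy as np
import pandas as pd
import shap
import xgboost as xgb

def top_process_variables(df: pd.DataFrame, label_col: str = "defect", k: int = 6):
    """Rank process variables by mean |SHAP value| from an XGBoost defect model."""
    X = df.drop(columns=[label_col])
    y = df[label_col]
    model = xgb.XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
    model.fit(X, y)
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X)             # shape: (n_samples, n_features)
    importance = np.abs(shap_values).mean(axis=0)
    return sorted(zip(X.columns, importance), key=lambda t: -t[1])[:k]

if __name__ == "__main__":
    # Synthetic stand-in for injection molding process data (illustrative only).
    rng = np.random.default_rng(0)
    df = pd.DataFrame({
        "melt_temp": rng.normal(230, 5, 1000),
        "injection_pressure": rng.normal(90, 10, 1000),
        "cooling_time": rng.normal(12, 2, 1000),
        "screw_speed": rng.normal(60, 8, 1000),
    })
    df["defect"] = (df["melt_temp"] + rng.normal(0, 3, 1000) > 236).astype(int)
    for name, score in top_process_variables(df, k=3):
        print(f"{name}: {score:.4f}")
```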

Development of a New Prediction Alarm Algorithm Applicable to Pumped Storage Power Plant (양수발전 설비에 적용 가능한 새로운 고장 예측경보 알고리즘 개발)

  • Dae-Yeon Lee;Soo-Yong Park;Dong-Hyung Lee
    • Journal of Korean Society of Industrial and Systems Engineering / v.46 no.2 / pp.133-142 / 2023
  • Large process plants are currently adopting predictive maintenance technology to transition from the traditional Time-Based Maintenance (TBM) approach to the Condition-Based Maintenance (CBM) approach, in order to improve equipment maintenance and productivity. Traditional techniques for predictive maintenance involved managing upper/lower thresholds (set points) on equipment signals or identifying anomalies through control charts. Recently, with the development of big data analysis techniques, machine learning-based AAKR (Auto-Associative Kernel Regression) and deep learning-based VAE (Variational Auto-Encoder) techniques are being actively applied for predictive maintenance. However, these predictive maintenance techniques are effective only during steady-state operation of plant equipment and are difficult to apply during start-up and shutdown periods, when signal values rise or fall rapidly. In addition, unlike nuclear and thermal power plants, which operate for hundreds of days after a single start-up, a pumped storage power plant starts up and shuts down repeatedly four to five times a day, so a prediction and alarm algorithm suited to these characteristics is needed. In this study, we analyze the predictive maintenance techniques used in existing nuclear and coal power plants and propose an approach for applying an optimal predictive alarm algorithm suited to the characteristics of Pumped Storage Power Plant (PSPP) facilities (a minimal sketch of the AAKR residual idea cited above follows below).
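
The paper's own algorithm is not given in the abstract; below is a minimal sketch of the AAKR residual computation that the abstract cites as an existing technique, assuming a memory matrix of normal-operation observations and a Gaussian kernel. The bandwidth, threshold, and names are illustrative assumptions.

```python
import numpy as np

def aakr_estimate(memory: np.ndarray, query: np.ndarray, bandwidth: float = 1.0):
    """Auto-Associative Kernel Regression estimate of a query observation.

    memory : (n_samples, n_signals) matrix of normal-operation data
    query  : (n_signals,) current observation
    Returns the kernel-weighted reconstruction of the query.
    """
    d2 = np.sum((memory - query) ** 2, axis=1)      # squared distances to memory rows
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))         # Gaussian kernel weights
    w /= w.sum() + 1e-12
    return w @ memory                                 # weighted mean of memory rows

def residual_alarm(memory, query, threshold=3.0, bandwidth=1.0):
    """Flag an anomaly when the reconstruction residual exceeds a threshold."""
    estimate = aakr_estimate(memory, query, bandwidth)
    residual = np.linalg.norm(query - estimate)
    return residual, residual > threshold

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    normal = rng.normal(0.0, 1.0, size=(500, 4))        # normal-operation memory matrix
    healthy = rng.normal(0.0, 1.0, size=4)
    faulty = healthy + np.array([0.0, 6.0, 0.0, 0.0])   # one drifted signal
    print(residual_alarm(normal, healthy))               # small residual, no alarm
    print(residual_alarm(normal, faulty))                 # large residual, alarm
```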