• Title/Summary/Keyword: AI Modeling

Control-Path Driven Process-Group Discovery Framework and its Experimental Validation for Process Mining and Reengineering (프로세스 마이닝과 리엔지니어링을 위한 제어경로 기반 프로세스 그룹 발견 프레임워크와 실험적 검증)

  • Thanh Hai Nguyen;Kwanghoon Pio Kim
    • Journal of Internet Computing and Services, v.24 no.5, pp.51-66, 2023
  • In this paper, we propose a new type of process discovery framework, named the control-path-driven process group discovery framework, for use in process mining and process reengineering in support of life-cycle management of business process models. We also develop a process mining system based on the proposed framework and verify it experimentally. The process execution event logs used for this experimental validation are defined as Process BIG-Logs, and we use them as the input datasets for the proposed discovery framework. Ultimately, we design and implement a control-path-driven process group discovery algorithm and framework that improve on the ρ-algorithm, and we verify the functional correctness of the proposed algorithm and framework by running the implemented system on a BIG-Log dataset. The process mining algorithm, framework, and system developed in this paper are all based on the structural information control net process modeling methodology.
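
To make the grouping idea concrete, here is a minimal Python sketch (not the authors' ρ-algorithm or its improved variant, whose details the abstract does not give) that clusters event-log traces by their control path, i.e., the time-ordered activity sequence of each case:

```python
# Group event-log traces by their control path: cases that follow the
# same ordered activity sequence fall into the same process group.
from collections import defaultdict

def group_traces_by_control_path(event_log):
    """event_log: dict mapping case_id -> list of (timestamp, activity)."""
    groups = defaultdict(list)
    for case_id, events in event_log.items():
        # The control path of a case is its time-ordered activity sequence.
        path = tuple(activity for _, activity in sorted(events))
        groups[path].append(case_id)
    return groups

log = {
    "c1": [(1, "register"), (2, "review"), (3, "approve")],
    "c2": [(1, "register"), (2, "review"), (3, "reject")],
    "c3": [(1, "register"), (2, "review"), (3, "approve")],
}
for path, cases in group_traces_by_control_path(log).items():
    print(" -> ".join(path), cases)
```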

Prediction of ocean surface current: Research status, challenges, and opportunities. A review

  • Ittaka Aldini;Adhistya E. Permanasari;Risanuri Hidayat;Andri Ramdhan
    • Ocean Systems Engineering, v.14 no.1, pp.85-99, 2024
  • Ocean surface currents play an essential role in the Earth's climate system and significantly impact the marine ecosystem, weather patterns, and human activities. However, predicting ocean surface currents remains challenging due to the complexity and variability of the oceanic processes involved. This review article provides an overview of the current research status, challenges, and opportunities in the prediction of ocean surface currents. We discuss the various observational and modeling approaches used to study ocean surface currents, including satellite remote sensing, in situ measurements, and numerical models. We also highlight the major challenges facing the prediction of ocean surface currents, such as data assimilation, model-observation integration, and the representation of sub-grid-scale processes. We suggest that future research should focus on developing advanced modeling techniques, such as machine learning, and on integrating multiple observational platforms to improve the accuracy and skill of ocean surface current predictions. We also emphasize the need to address the limitations of observing instruments, such as delays in receiving data, versioning errors, missing data, and undocumented data processing techniques; improving data availability and quality will be essential for enhancing the accuracy of predictions. Future research should also focus on effective bias correction methods, systematic data preprocessing procedures, and the use of combined models and explainable AI (XAI) models to incorporate data from various sources. Advances in predicting ocean surface currents will benefit applications such as maritime operations, climate studies, and ecosystem management.
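
As a minimal illustration of one preprocessing step the review calls for, the sketch below applies a simple mean-bias correction to modeled current speeds using a training period of observations; the data are synthetic and operational bias-correction schemes are considerably richer:

```python
# Simple mean-bias correction: shift modeled values by the average
# (observed - modeled) offset estimated on a training period.
import numpy as np

def mean_bias_correction(modeled, observed_train, modeled_train):
    bias = np.mean(observed_train - modeled_train)
    return modeled + bias

rng = np.random.default_rng(0)
truth = 0.5 + 0.1 * rng.standard_normal(100)             # "observed" speed (m/s)
model = truth - 0.07 + 0.05 * rng.standard_normal(100)   # model with a low bias
corrected = mean_bias_correction(model[50:], truth[:50], model[:50])
print("raw bias:", float(np.mean(truth[50:] - model[50:])))
print("corrected bias:", float(np.mean(truth[50:] - corrected)))
```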

Water Digital Twin for High-tech Electronics Industrial Wastewater Treatment System (II): e-ASM Calibration, Effluent Prediction, Process selection, and Design (첨단 전자산업 폐수처리시설의 Water Digital Twin(II): e-ASM 모델 보정, 수질 예측, 공정 선택과 설계)

  • Heo, SungKu;Jeong, Chanhyeok;Lee, Nahui;Shim, Yerim;Woo, TaeYong;Kim, JeongIn;Yoo, ChangKyoo
    • Clean Technology, v.28 no.1, pp.79-93, 2022
  • In this study, an electronics industrial wastewater activated sludge model (e-ASM), intended for use as a Water Digital Twin, was calibrated against real measurements from lab-scale and pilot-scale high-tech electronics industrial wastewater treatment reactors and examined for treatment performance, effluent quality prediction, and optimal process selection. For specialized modeling of a high-tech electronics industrial wastewater treatment system, the kinetic parameters of the e-ASM were identified by sensitivity analysis and calibrated by the multiple response surface method (MRS). The calibrated e-ASM matched the experimental data from the lab-scale and pilot-scale processes with more than 90% agreement. Four electronics industrial wastewater treatment processes (MLE, A2/O, 4-stage MLE-MBR, and Bardenpho-MBR) were implemented with the proposed Water Digital Twin to compare their removal efficiencies under various electronics industrial wastewater characteristics. Bardenpho-MBR stably removed more than 90% of the chemical oxygen demand (COD) and showed the highest nitrogen removal efficiency. Furthermore, an influent with a high TMAH concentration of 1,800 mg/L could be removed at 98% efficiency when the hydraulic retention time (HRT) of the Bardenpho-MBR process was 3 days or more. Hence, the e-ASM developed in this study is expected to serve as a highly compatible Water Digital Twin platform in a variety of situations, including plant optimization, Water AI, and the selection of the best available technology (BAT) for a sustainable high-tech electronics industry.
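
A minimal sketch of the core of such a calibration workflow, under the simplifying assumption of fitting a Monod-type rate to measurements by least squares; the rate expression and the data are illustrative stand-ins, not the e-ASM or the paper's multiple response surface method:

```python
# Fit kinetic parameters of a Monod rate expression (a common building
# block of activated sludge models) to hypothetical measured rates.
import numpy as np
from scipy.optimize import curve_fit

def monod_rate(substrate, mu_max, K_s):
    return mu_max * substrate / (K_s + substrate)

S = np.array([5.0, 10.0, 20.0, 40.0, 80.0, 160.0])    # substrate conc. (mg/L)
rate_obs = np.array([0.9, 1.5, 2.2, 2.8, 3.2, 3.4])   # hypothetical measurements

(mu_max, K_s), _ = curve_fit(monod_rate, S, rate_obs, p0=[4.0, 20.0])
print(f"calibrated mu_max={mu_max:.2f} 1/d, K_s={K_s:.1f} mg/L")
```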

Development of an Image Processing System for the Large Size High Resolution Satellite Images (대용량 고해상 위성영상처리 시스템 개발)

  • 김경옥;양영규;안충현
    • Korean Journal of Remote Sensing, v.14 no.4, pp.376-391, 1998
  • Upcoming satellite images will have 1- to 3-meter ground resolution and will be very useful for analyzing the current status of the Earth's surface. An image processing system named GeoWatch, with more intelligent image processing algorithms, has been designed and implemented to support detailed analysis of the land surface using high-resolution satellite imagery. GeoWatch is a valuable tool for satellite image processing tasks such as digitizing, geometric correction using ground control points, interactive enhancement, various transforms, arithmetic operations, and the calculation of vegetation indices. It can be used for tasks such as change detection, land cover classification, capacity estimation of industrial complexes, and urban information extraction, using more intelligent analysis methods with a variety of visual techniques. The strong points of this system are a flexible method for saving algorithms for efficient handling of large images (e.g., full scenes), automatic menu generation, and a powerful visual programming environment. Most existing image processing systems use general graphical user interfaces; in this paper we adopted a visual programming language for remotely sensed image processing because of its powerful programmability and ease of use. The system is an integrated raster/vector analysis system equipped with many useful functions such as vector overlay, flight simulation, 3D display, and object modeling techniques. In addition to the modules for image and digital signal processing, the system provides utilities such as a toolbox and an interactive image editor. This paper also presents several cases of image analysis with AI (artificial intelligence) techniques and the design concept for the visual programming environment.
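
As a small, hedged example of one capability listed above, the sketch below computes a vegetation index (NDVI) from red and near-infrared bands; the arrays are synthetic stand-ins for satellite bands, and this is not GeoWatch code:

```python
# NDVI = (NIR - R) / (NIR + R), computed per pixel on two bands.
import numpy as np

def ndvi(red, nir, eps=1e-9):
    red = red.astype(np.float64)
    nir = nir.astype(np.float64)
    return (nir - red) / (nir + red + eps)

red = np.array([[30, 60], [90, 120]], dtype=np.uint8)
nir = np.array([[90, 80], [60, 110]], dtype=np.uint8)
print(ndvi(red, nir))
```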

Development of a Dynamic Downscaling Method for the Regional Climate Model (MM5) Using a General Circulation Model (CCSM3) (전지구 모델(CCSM3)을 이용한 지역기후 모델(MM5)의 역학적 상세화 기법 개발)

  • Choi, Jin-Young;Song, Chang-Geun;Lee, Jae-Bum;Hong, Sung-Chul;Bang, Cheol-Han
    • Journal of Climate Change Research, v.2 no.2, pp.79-91, 2011
  • In order to study interactions between climate change and air quality, a modeling system including a downscaling scheme has been developed in an integrated manner. This research focuses on the development of a downscaling method that uses CCSM3 outputs as the initial and boundary conditions for the regional climate model MM5. Horizontal and vertical interpolation was performed to convert from the latitude/longitude grid and hybrid vertical coordinate of CCSM3 to the Lambert-conformal Arakawa-B grid and sigma vertical coordinate of MM5. Variable diagnosis was performed to map between the differing variables and units of CCSM and MM5. To evaluate the dynamic downscaling performance, spatial distributions of CCSM/MM5 and NRA/MM5 outputs were compared and statistical analyses were conducted. The temperature and precipitation patterns of CCSM/MM5 in summer and winter were similar to those of the observational data over East Asia and the Korean Peninsula. In addition, the statistical analysis showed an agreement index (AI) above 0.9 and a correlation coefficient of about 0.9. These results indicate that the dynamic downscaling system built in this study can be used for research on the interaction between climate change and air quality.
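
The two reported statistics are easy to reproduce; below is a minimal sketch of Willmott's index of agreement (assuming that is the "agreement index (AI)" meant here) and the Pearson correlation coefficient, applied to illustrative values:

```python
# Willmott (1981) index of agreement and Pearson correlation between
# downscaled model output and observations.
import numpy as np

def index_of_agreement(pred, obs):
    """d = 1 - sum((P-O)^2) / sum((|P-Obar| + |O-Obar|)^2)."""
    obar = np.mean(obs)
    return 1.0 - np.sum((pred - obs) ** 2) / np.sum(
        (np.abs(pred - obar) + np.abs(obs - obar)) ** 2
    )

obs = np.array([271.2, 275.8, 280.1, 285.6, 290.3])   # e.g., 2-m temperature (K)
pred = np.array([270.7, 276.4, 279.5, 286.2, 291.0])  # downscaled model output
print("agreement index:", round(index_of_agreement(pred, obs), 3))
print("correlation:", round(float(np.corrcoef(pred, obs)[0, 1]), 3))
```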

Comparative analysis of the digital circuit design ability of ChatGPT (ChatGPT을 활용한 디지털회로 설계 능력에 대한 비교 분석)

  • Kihun Nam
    • The Journal of the Convergence on Culture Technology, v.9 no.6, pp.967-971, 2023
  • Recently, a variety of AI-based platform services have become available; one of them is ChatGPT, which processes large quantities of natural-language data and generates answers after self-learning. ChatGPT can perform various tasks, including software programming in the IT sector. In particular, it can help generate simple programs and correct errors in C, a major programming language. Accordingly, ChatGPT is expected to be capable of effectively using Verilog HDL, a hardware description language with C-like syntax. Verilog HDL synthesis, however, maps the described statements onto a logic circuit, so it must be verified whether the generated designs operate properly. In this paper, we select small-scale logic circuits for ease of experimentation and verify the results of circuits generated by ChatGPT against human-designed circuits. For the experimental environment, Xilinx ISE 14.7 was used for module modeling, and the xc3s1000 FPGA chip was used for module implementation. Comparative analysis was performed on the FPGA area used and the processing time to compare the performance of the ChatGPT designs and the hand-written Verilog HDL designs.
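
A sketch of how the generation step of such an experiment might be scripted, assuming the OpenAI Python SDK with an API key in the environment; the model name, prompt, and file name are illustrative, and the returned HDL still needs the synthesis-and-verification pass the paper performs:

```python
# Ask a ChatGPT model for a small Verilog module and save it for synthesis.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Write synthesizable Verilog-2001 for a 4-bit ripple-carry adder "
    "named adder4 with inputs a[3:0], b[3:0], cin and outputs sum[3:0], cout. "
    "Return only the Verilog source."
)
response = client.chat.completions.create(
    model="gpt-4",  # hypothetical choice; the paper does not name a version
    messages=[{"role": "user", "content": prompt}],
)
verilog_source = response.choices[0].message.content
with open("adder4.v", "w") as f:
    f.write(verilog_source)  # then synthesize and compare area/timing on the FPGA
```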

User Experience Analysis and Management Based on Text Mining: A Smart Speaker Case (텍스트 마이닝 기반 사용자 경험 분석 및 관리: 스마트 스피커 사례)

  • Dine Yeon;Gayeon Park;Hee-Woong Kim
    • Information Systems Review, v.22 no.2, pp.77-99, 2020
  • A smart speaker is a device that provides an interactive voice-based service through which users can search for and use information and content such as music, calendars, weather, and merchandise using artificial intelligence. Since AI technology provides more sophisticated and better optimized services as it accumulates data, early smart speaker manufacturers tried to build platforms through aggressive marketing. However, more than a third of owners use their smart speaker less than once a month, and user satisfaction is only 49%. Accordingly, strengthening the user experience of smart speakers has become necessary to acquire a large number of users and enable continuous use. Therefore, this study analyzes the user experience of smart speakers and proposes methods for enhancing it. Based on a two-stage analysis, we propose ways to enhance the user experience for each smart speaker model. Existing research on the smart speaker user experience relied mainly on surveys and interviews, whereas this study collects actual review data written by users. This study also interprets the analysis results in terms of smart speaker user experience dimensions; interpreting text mining results through a developed set of user experience dimensions is of academic significance. Based on the results of this study, we suggest strategies that smart speaker manufacturers can use to enhance the user experience.
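
As a hedged sketch of the kind of text mining such a study performs, the snippet below extracts topics from a handful of toy reviews with LDA; the study's actual pipeline and its mapping onto user experience dimensions are not reproduced here:

```python
# Topic extraction from user reviews with Latent Dirichlet Allocation.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

reviews = [
    "voice recognition fails with music playing",
    "great speaker sound quality for music",
    "alexa misunderstands my voice commands",
    "sound quality is amazing for the price",
    "setup was easy but voice control is unreliable",
]
vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(reviews)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

terms = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-4:][::-1]]
    print(f"topic {k}:", ", ".join(top))
```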

Deep Learning-Enabled Detection of Pneumoperitoneum in Supine and Erect Abdominal Radiography: Modeling Using Transfer Learning and Semi-Supervised Learning

  • Sangjoon Park;Jong Chul Ye;Eun Sun Lee;Gyeongme Cho;Jin Woo Yoon;Joo Hyeok Choi;Ijin Joo;Yoon Jin Lee
    • Korean Journal of Radiology, v.24 no.6, pp.541-552, 2023
  • Objective: Detection of pneumoperitoneum using abdominal radiography, particularly in the supine position, is often challenging. This study aimed to develop and externally validate a deep learning model for the detection of pneumoperitoneum using supine and erect abdominal radiography. Materials and Methods: A model that can utilize "pneumoperitoneum" and "non-pneumoperitoneum" classes was developed through knowledge distillation. To train the proposed model with limited training data and weak labels, it was trained using a recently proposed semi-supervised learning method called distillation for self-supervised and self-train learning (DISTL), which leverages the Vision Transformer. The proposed model was first pre-trained on chest radiographs to exploit knowledge common across modalities, and then fine-tuned and self-trained on labeled and unlabeled abdominal radiographs from both supine and erect positions. In total, 191,212 chest radiographs (CheXpert data) were used for pre-training, and 5,518 labeled and 16,671 unlabeled abdominal radiographs were used for fine-tuning and self-supervised learning, respectively. The proposed model was internally validated on 389 abdominal radiographs and externally validated on 475 and 798 abdominal radiographs from two institutions. We evaluated diagnostic performance for pneumoperitoneum using the area under the receiver operating characteristic curve (AUC) and compared it with that of radiologists. Results: In the internal validation, the proposed model had an AUC, sensitivity, and specificity of 0.881, 85.4%, and 73.3% for the supine position and 0.968, 91.1%, and 95.0% for the erect position, respectively. In the external validation at the two institutions, the AUCs were 0.835 and 0.852 for the supine position and 0.909 and 0.944 for the erect position. In the reader study, the readers' performance improved with the assistance of the proposed model. Conclusion: The proposed model trained with the DISTL method can accurately detect pneumoperitoneum on abdominal radiography in both the supine and erect positions.
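
A minimal sketch of the knowledge-distillation ingredient: a student network is trained to match a teacher's temperature-softened predictions on unlabeled images. The tiny CNN and random tensors are placeholders; the paper's models are Vision Transformers trained with the full DISTL recipe:

```python
# Knowledge distillation on unlabeled data: KL divergence between
# temperature-softened teacher and student outputs.
import torch
import torch.nn as nn
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=2.0):
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T

def make_model():
    # Placeholder 2-class classifier: pneumoperitoneum / non-pneumoperitoneum.
    return nn.Sequential(nn.Conv2d(1, 8, 3, 2), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                         nn.Linear(8, 2))

teacher, student = make_model(), make_model()
teacher.eval()

x = torch.randn(4, 1, 64, 64)  # a batch of unlabeled "radiographs"
with torch.no_grad():
    t_logits = teacher(x)
loss = distillation_loss(student(x), t_logits)
loss.backward()
print("distillation loss:", float(loss))
```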

Corporate Default Prediction Model Using Deep Learning Time Series Algorithm, RNN and LSTM (딥러닝 시계열 알고리즘 적용한 기업부도예측모형 유용성 검증)

  • Cha, Sungjae;Kang, Jungseok
    • Journal of Intelligence and Information Systems, v.24 no.4, pp.1-32, 2018
  • In addition to stakeholders such as managers, employees, creditors, and investors of bankrupt companies, corporate defaults have a ripple effect on the local and national economy. Before the Asian financial crisis, the Korean government analyzed only SMEs and tried to improve the forecasting power of a single default prediction model rather than developing various corporate default models. As a result, even large corporations, the so-called chaebol enterprises, went bankrupt. Even after that, analysis of past corporate defaults remained focused on specific variables, and when the government restructured companies immediately after the global financial crisis, it concentrated on only a few main variables such as the debt ratio. A multifaceted study of corporate default prediction models is essential to serve diverse interests and to avoid situations like the Lehman Brothers case of the global financial crisis, in which everything collapses in a single moment. The key variables that drive corporate defaults vary over time. Comparing Beaver's (1967, 1968) and Altman's (1968) analyses with Deakin's (1972) study confirms that the major factors affecting corporate failure have changed, and Grice (2001) likewise found, through Zmijewski's (1984) and Ohlson's (1980) models, that the importance of predictive variables shifts. However, past studies use static models, and most do not consider the changes that occur over time. Therefore, to construct consistent prediction models, the time-dependent bias must be compensated for by a time series algorithm that reflects dynamic change. Centered on the global financial crisis, which had a significant impact on Korea, this study uses 10 years of annual corporate data from 2000 to 2009. The data are divided into training, validation, and test sets covering 7, 2, and 1 years, respectively. To construct a bankruptcy model that remains consistent as time changes, we first train a deep learning time series model on the data before the financial crisis (2000-2006). Parameter tuning of the existing models and of the deep learning time series algorithm is conducted on validation data that includes the financial crisis period (2007-2008). As a result, we construct a model that shows a pattern similar to the training results and excellent predictive power. Each bankruptcy prediction model is then retrained on the combined training and validation data (2000-2008), applying the optimal parameters found in validation. Finally, each corporate default prediction model is evaluated and compared on the test data (2009) using the models trained over the nine years, demonstrating the usefulness of the corporate default prediction model based on the deep learning time series algorithm. In addition, by adding Lasso regression to the existing variable selection methods (multivariate discriminant analysis and the logit model), we show that the deep learning time series model is useful for robust corporate default prediction across all three bundles of variables. The definition of bankruptcy used is the same as that of Lee (2015). Independent variables include financial information such as the financial ratios used in previous studies. Multivariate discriminant analysis, the logit model, and the Lasso regression model are used to select the optimal variable groups.
The predictive performance of the multivariate discriminant analysis model proposed by Altman (1968), the logit model proposed by Ohlson (1980), non-time-series machine learning algorithms, and the deep learning time series algorithms is compared. Corporate data come with the limitations of nonlinear variables, multicollinearity among variables, and a lack of data. The logit model accommodates nonlinearity, the Lasso regression model resolves the multicollinearity problem, and the deep learning time series algorithm, using a variable data generation method, compensates for the lack of data. Big data technology, a leading technology of the future, is moving from simple human analysis to automated AI analysis, and ultimately toward intertwined AI applications. Although research on corporate default prediction models using time series algorithms is still in its early stages, the deep learning algorithm is much faster than regression analysis at corporate default prediction modeling and is also more effective in predictive power. Amid the Fourth Industrial Revolution, the Korean government and governments overseas are working hard to integrate such systems into the everyday life of their nations and societies, yet deep learning time series research for the financial industry remains insufficient. This is an initial study of deep learning time series analysis of corporate defaults, and we hope it will serve as comparative reference material for non-specialists beginning studies that combine financial data with deep learning time series algorithms.
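
A minimal sketch of the core modeling idea, assuming PyTorch: an LSTM reads a seven-year sequence of financial ratios per firm and outputs a default logit. The dimensions and random data are placeholders for the paper's 2000-2009 Korean corporate panel:

```python
# LSTM binary classifier over per-firm sequences of annual financial ratios.
import torch
import torch.nn as nn

class DefaultLSTM(nn.Module):
    def __init__(self, n_ratios=10, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_ratios, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # logit of default probability

    def forward(self, x):               # x: (firms, years, ratios)
        _, (h_n, _) = self.lstm(x)      # last hidden state summarizes the sequence
        return self.head(h_n[-1]).squeeze(-1)

model = DefaultLSTM()
x = torch.randn(64, 7, 10)              # 64 firms, 7 years of 10 ratios each
y = torch.randint(0, 2, (64,)).float()  # 1 = defaulted in the test year
loss = nn.BCEWithLogitsLoss()(model(x), y)
loss.backward()
print("training loss:", float(loss))
```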

Solid Phase Synthesis of N-(3-hydroxysulfonyl)-L-homoserine Lactone Derivatives and their Inhibitory Effects on Quorum Sensing Regulation in Vibrio harveyi (고체상 합성법에 의해 합성된 N-(3-hydroxysulfonyl)-L-homoserine Lactone 유사체들의 Vibrio harveyi 쿼럼 센싱에 대한 저해 효과)

  • Kim, Cheol-Jin;Park, Hyung-Yeon;Kim, Jae-Eun;Park, Hee-Jin;Lee, Bon-Su;Choi, Yu-Sang;Lee, Joon-Hee;Yoon, Je-Yong
    • Microbiology and Biotechnology Letters, v.37 no.3, pp.248-257, 2009
  • Inhibitors of Vibrio harveyi quorum sensing (QS) signaling were developed by modifying the molecular structure of the major signal, N-3-hydroxybutanoyl-L-homoserine lactone (3-OH-C4-HSL). A series of structural derivatives, N-(3-hydroxysulfonyl)-L-homoserine lactones (HSHLs), were synthesized by solid-phase organic synthesis. The in vivo QS inhibition by these compounds was measured with a bioassay based on V. harveyi bioluminescence, and all showed significant inhibitory effects. To analyze the interaction between these compounds and LuxN, the 3-OH-C4-HSL receptor protein of V. harveyi, we tentatively determined the putative signal-binding domain of LuxN based on sequence homology with other acyl-HSL binding proteins, predicted the partial 3-D structure of this putative domain using the ORCHESTRA program, and then estimated the binding poses and energies (docking scores) of 3-OH-C4-HSL and the HSHLs within the domain. Comparing the results of this modeling study with those of the in vivo bioassay, we suggest that in silico interpretation of the interaction between ligands and their receptor proteins can be a valuable way to develop better competitive inhibitors, especially when structural information for the protein is limited.