• Title/Summary/Keyword: Processing


A Study on Health Impact Assessment and Emissions Reduction System Using AERMOD (AERMOD를 활용한 건강위해성평가 및 배출저감제도에 관한 연구)

  • Seong-Su Park;Duk-Han Kim;Hong-Kwan Kim;Young-Woo Chon
    • Journal of the Society of Disaster Information
    • /
    • v.20 no.1
    • /
    • pp.93-105
    • /
    • 2024
  • Purpose: This study aims to quantitatively determine the impact on nearby residents by selecting chemicals emitted from workplaces among the substances subject to the chemical emission plan and predicting their concentrations with an atmospheric dispersion program. Method: The study substances were selected considering half-life, toxicity, and the availability of monitoring station data. The areas where the selected substances are discharged were chosen as study areas, and four locations with high floating populations were selected for the health risk evaluation. Result: AERMOD was executed after terrain and meteorological preprocessing to obtain predicted concentrations. The health hazard assessment indicated that only dichloromethane exceeded the threshold for children, while tetrachloroethylene and chloroform appeared at levels that cannot be ignored for both children and adults. Conclusion: Currently, in the domestic context, health hazard assessments are conducted under the regulations of the "Environmental Health Act," which considers a health risk to exist when the hazard index exceeds a certain threshold. The anticipated expansion of the list of substances subject to the chemical emission plan to 415 types by 2030 suggests the need for efficient management within workplaces. Where the hazard index surpasses the threshold, effective chemical management can be achieved by setting priorities based on background concentrations and the concentrations predicted through atmospheric dispersion modeling.
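
The threshold logic described above is the standard non-cancer hazard quotient: each substance's exposure concentration is divided by a reference concentration, and the quotients (and their sum, the hazard index) are compared against 1. The minimal sketch below illustrates that arithmetic only; the concentrations and reference values are hypothetical, not the study's data or AERMOD output.

```python
# Minimal hazard-quotient sketch (illustrative values, not the study's data).
# HQ = predicted (or background + predicted) air concentration / reference concentration.
# A hazard index (HI) above 1 is conventionally treated as a potential health concern.

reference_concentrations = {       # ug/m3, hypothetical reference values
    "dichloromethane": 600.0,
    "tetrachloroethylene": 40.0,
    "chloroform": 98.0,
}

predicted_concentrations = {       # ug/m3, hypothetical dispersion-model output + background
    "dichloromethane": 750.0,
    "tetrachloroethylene": 25.0,
    "chloroform": 30.0,
}

hazard_quotients = {
    substance: predicted_concentrations[substance] / rfc
    for substance, rfc in reference_concentrations.items()
}
hazard_index = sum(hazard_quotients.values())

# Ranking by HQ mirrors the prioritization idea in the conclusion.
for substance, hq in sorted(hazard_quotients.items(), key=lambda kv: -kv[1]):
    flag = "exceeds threshold" if hq > 1 else "below threshold"
    print(f"{substance}: HQ = {hq:.2f} ({flag})")
print(f"Hazard index (sum of HQs) = {hazard_index:.2f}")
```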

Enhancing Empathic Reasoning of Large Language Models Based on Psychotherapy Models for AI-assisted Social Support (인공지능 기반 사회적 지지를 위한 대형언어모형의 공감적 추론 향상: 심리치료 모형을 중심으로)

  • Yoon Kyung Lee;Inju Lee;Minjung Shin;Seoyeon Bae;Sowon Hahn
    • Korean Journal of Cognitive Science
    • /
    • v.35 no.1
    • /
    • pp.23-48
    • /
    • 2024
  • Building human-aligned artificial intelligence (AI) for social support remains challenging despite the advancement of Large Language Models (LLMs). We present a novel method, Chain of Empathy (CoE) prompting, that uses insights from psychotherapy to induce LLMs to reason about human emotional states. The method is inspired by several psychotherapy approaches, namely Cognitive-Behavioral Therapy (CBT), Dialectical Behavior Therapy (DBT), Person-Centered Therapy (PCT), and Reality Therapy (RT), each of which leads to different patterns of interpreting clients' mental states. LLMs without CoE reasoning generated predominantly exploratory responses. However, when LLMs used CoE reasoning, we found a more comprehensive range of empathic responses aligned with each psychotherapy model's reasoning pattern. For empathic expression classification, the CBT-based CoE produced the most balanced classification of empathic expression labels and text generation of empathic responses. However, for emotion reasoning, other approaches such as DBT and PCT showed higher performance in emotion reaction classification. We further conducted qualitative analysis and alignment scoring of each prompt-generated output. The findings underscore the importance of understanding the emotional context and how it affects human-AI communication. Our research contributes to understanding how psychotherapy models can be incorporated into LLMs, facilitating the development of context-aware, safe, and empathically responsive AI.
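
The core of the method, as described, is a therapy-specific reasoning instruction inserted before the empathic response is generated. The sketch below shows one way such a CoE-style prompt might be assembled; the template wording and the `chat` call are illustrative assumptions, not the paper's actual prompts or API.

```python
# Illustrative Chain of Empathy (CoE)-style prompt skeleton (not the paper's exact wording).
# A therapy-specific reasoning step precedes the generation of the empathic reply.

COE_TEMPLATES = {
    "CBT": ("First, infer the client's emotional state and the thought or appraisal "
            "that may be driving it. Then respond empathically, gently noting how the "
            "appraisal relates to the feeling."),
    "DBT": ("First, infer the client's emotional state and validate it as understandable "
            "given the situation. Then respond empathically, balancing acceptance with "
            "encouragement toward coping."),
}

def build_coe_prompt(client_message: str, approach: str = "CBT") -> str:
    """Compose a CoE-style prompt for one client message under a chosen therapy framing."""
    return (
        "You are a supportive listener.\n"
        f"Reasoning instruction: {COE_TEMPLATES[approach]}\n"
        f"Client: {client_message}\n"
        "Supporter:"
    )

# Example usage with a hypothetical LLM call (replace `chat` with a real client API):
# reply = chat(build_coe_prompt("I failed my exam and feel like giving up."))
```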

The Contact and Parallel Analysis of Smoothed Particle Hydrodynamics (SPH) Using Polyhedral Domain Decomposition (다면체영역분할을 이용한 SPH의 충돌 및 병렬해석)

  • Moonho Tak
    • Journal of the Korean GEO-environmental Society
    • /
    • v.25 no.4
    • /
    • pp.21-28
    • /
    • 2024
  • In this study, a polyhedral domain decomposition method for Smoothed Particle Hydrodynamics (SPH) analysis is introduced. SPH, one of the meshless methods, is a numerical method for fluid flow simulation. It is useful for analyzing fluidic soil or fluid-structure interaction problems. SPH is a particle-based method in which an increased particle count generally improves accuracy but diminishes numerical efficiency. To enhance numerical efficiency, parallel processing algorithms are commonly employed together with Cartesian coordinate-based domain decomposition. However, for parallel analysis of complex geometric shapes or fluidic problems under dynamic boundary conditions, Cartesian coordinate-based decomposition may not be suitable. The introduced polyhedral domain decomposition technique offers advantages in parallel efficiency for such problems, as it allows partitioning into various forms of 3D polyhedral elements that better fit the problem. Physical properties of SPH particles are calculated using information from neighboring particles within the smoothing length. Methods are presented for sharing particle information that the partitioning physically separates, and for exchanging information at cross-points where parallel efficiency might otherwise diminish. In numerical examples, the proposed method's parallel efficiency approached 95% for up to 12 cores; as the number of cores increases further, parallel efficiency decreases due to increased information sharing among cores.
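
For reference, the neighbor-based evaluation mentioned above follows the standard SPH summation: a particle's properties are accumulated from neighbors inside the kernel support defined by the smoothing length. The sketch below shows a conventional cubic-spline density summation with a brute-force neighbor search; it is a generic SPH illustration, not the paper's decomposed, parallel solver.

```python
# Minimal SPH density-summation sketch (standard formulation, not the paper's solver).
# Each particle's density is summed over neighbors within the kernel support (2h),
# using a 3D cubic-spline kernel. Brute-force neighbor search is used for clarity.
import numpy as np

def cubic_spline_kernel(r: np.ndarray, h: float) -> np.ndarray:
    """3D cubic spline kernel W(r, h) with support radius 2h."""
    sigma = 1.0 / (np.pi * h**3)
    q = r / h
    w = np.zeros_like(q)
    m1 = q <= 1.0
    m2 = (q > 1.0) & (q <= 2.0)
    w[m1] = sigma * (1.0 - 1.5 * q[m1]**2 + 0.75 * q[m1]**3)
    w[m2] = sigma * 0.25 * (2.0 - q[m2])**3
    return w

def density_summation(positions: np.ndarray, masses: np.ndarray, h: float) -> np.ndarray:
    """rho_i = sum_j m_j W(|x_i - x_j|, h), counting only neighbors within 2h."""
    diffs = positions[:, None, :] - positions[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    w = cubic_spline_kernel(dists, h)
    w[dists > 2.0 * h] = 0.0
    return (masses[None, :] * w).sum(axis=1)

# Example usage (random particle cloud):
# positions = np.random.rand(1000, 3); masses = np.full(1000, 1e-3)
# rho = density_summation(positions, masses, h=0.05)
```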

Comparative analysis of wavelet transform and machine learning approaches for noise reduction in water level data (웨이블릿 변환과 기계 학습 접근법을 이용한 수위 데이터의 노이즈 제거 비교 분석)

  • Hwang, Yukwan;Lim, Kyoung Jae;Kim, Jonggun;Shin, Minhwan;Park, Youn Shik;Shin, Yongchul;Ji, Bongjun
    • Journal of Korea Water Resources Association
    • /
    • v.57 no.3
    • /
    • pp.209-223
    • /
    • 2024
  • In the context of the fourth industrial revolution, data-driven decision-making has become increasingly pivotal. However, the integrity of data analysis is compromised if data quality is not adequately ensured, potentially leading to biased interpretations. This is particularly critical for water level data, essential for water resource management, which often encounters quality issues such as missing values, spikes, and noise. This study addresses the challenge of noise-induced data quality deterioration, which complicates trend analysis and may produce anomalous outliers. To mitigate this issue, we propose a noise removal strategy employing the Wavelet Transform, a technique renowned for its efficacy in signal processing and noise elimination. The advantage of the Wavelet Transform lies in its operational efficiency: it reduces both time and cost because it obviates the need to acquire the true values of the collected data. This study conducted a comparative performance evaluation between our Wavelet Transform-based approach and the Denoising Autoencoder, a prominent machine learning method for noise reduction. The findings demonstrate that the Coiflets wavelet function outperforms the Denoising Autoencoder across various metrics, including Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE), and Mean Squared Error (MSE). The superiority of the Coiflets function suggests that selecting an appropriate wavelet function tailored to the specific application environment can effectively address data quality issues caused by noise. This study underscores the potential of the Wavelet Transform as a robust tool for enhancing the quality of water level data, thereby contributing to the reliability of water resource management decisions.
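
A minimal version of the wavelet-based denoising described above can be assembled with PyWavelets: decompose the series with a Coiflets wavelet, soft-threshold the detail coefficients, reconstruct, and score the result against a reference series with MAE, MAPE, and MSE. The wavelet order, decomposition level, and universal threshold below are illustrative assumptions rather than the study's configuration, and the synthetic series stands in for real water level data.

```python
# Minimal Coiflets wavelet denoising sketch with PyWavelets (illustrative settings,
# not the study's configuration). Soft-thresholds detail coefficients and reconstructs.
import numpy as np
import pywt

def wavelet_denoise(signal: np.ndarray, wavelet: str = "coif3", level: int = 4) -> np.ndarray:
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Universal threshold estimated from the finest detail level (MAD-based noise sigma).
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    threshold = sigma * np.sqrt(2.0 * np.log(len(signal)))
    denoised = [coeffs[0]] + [pywt.threshold(c, threshold, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[: len(signal)]

def mae(y_true, y_pred):  return float(np.mean(np.abs(y_true - y_pred)))
def mse(y_true, y_pred):  return float(np.mean((y_true - y_pred) ** 2))
def mape(y_true, y_pred): return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100)

# Synthetic example: a smooth water-level-like trend with added noise.
t = np.linspace(0, 10, 1024)
true_level = 2.0 + 0.5 * np.sin(0.8 * t)
noisy = true_level + np.random.normal(0, 0.05, t.size)
cleaned = wavelet_denoise(noisy)
print(f"MAE={mae(true_level, cleaned):.4f}  MSE={mse(true_level, cleaned):.6f}  "
      f"MAPE={mape(true_level, cleaned):.2f}%")
```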

An Exploratory Study of the Determinants of Global Sourcing Intention in Korean Clothing Sewing Industry: Focusing on Women's Knit Wear Production (국내 의류봉제 산업의 글로벌소싱 의향 고려요인 연구: 여성니트복종(women's knit wear) 생산을 중심으로)

  • Dabin Yoo;Sunwook Chung
    • Asia-Pacific Journal of Business
    • /
    • v.14 no.4
    • /
    • pp.67-85
    • /
    • 2023
  • Purpose - This study investigates the determinants of global sourcing intention in the clothing sewing industry, with a particular focus on women's knit wear production. Design/methodology/approach - This study collected a unique set of qualitative data through 31 in-depth interviews with fashion brands, promotion agencies, and sewing factories between July 2023 and October 2023. In addition, it analyzed the dataset using MAXQDA to complement the research findings. Findings - We have two findings. First, the interviewees commonly mentioned the following factors as reasons for considering global sourcing: human factors (aging of skilled technicians and labor shortages), financial factors (the gap between domestic and overseas production unit prices), relational factors (lack of novelty), and physical factors (loss of production infrastructure and networks); as reasons for continuing domestic sourcing, they cited human factors (a skilled workforce), production factors (delivery dates and product quality), and relational factors (timely communication and mutual trust). Additional code analysis of the interviews supports this finding. Second, there was a subtle difference between buyers (brands) and suppliers (promotion agencies and processing plants): buyers consider exact delivery dates critical so that trend-sensitive women's knit wear reaches the market on time, whereas suppliers took financial factors such as production costs, labor costs, and labor shortages more seriously. Research implications or Originality - This study provides a richer and more balanced view than the existing literature, which has generally discussed global sourcing across the clothing industry as a whole despite the considerable diversity within it. In addition, through qualitative research, we show that sewing-industry sourcing decisions are shaped by a complex set of factors, and by identifying and categorizing the determinants of global sourcing, we complement the existing survey-centered research on the clothing sewing industry. On a practical note, this study shows that buyers (brands) and suppliers (promotion agencies and sewing factories) view domestic and global sourcing differently, suggesting practical implications for revitalizing networks and deriving win-win cooperation network models among industry members in the future.

Enhancing Throughput and Reducing Network Load in Central Bank Digital Currency Systems using Reinforcement Learning (강화학습 기반의 CBDC 처리량 및 네트워크 부하 문제 해결 기술)

  • Yeon Joo Lee;Hobin Jang;Sujung Jo;GyeHyun Jang;Geontae Noh;Ik Rae Jeong
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.34 no.1
    • /
    • pp.129-141
    • /
    • 2024
  • Amid the acceleration of digital transformation across various sectors, the financial market is increasingly focusing on the development of digital and electronic payment methods, including currency. Among these, Central Bank Digital Currencies (CBDCs) are emerging as future digital currencies that could replace physical cash: they are stable, not subject to value fluctuation, and can be exchanged one-to-one with existing physical currencies. Recently, both domestic and international efforts have been underway to research and develop CBDCs. However, current CBDC systems face scalability issues such as delays in processing large transaction volumes, slow response times, and network congestion. To build a universal CBDC system, it is crucial to resolve these scalability issues, including the low throughput and network overload inherent in existing blockchain technologies. Therefore, this study proposes a reinforcement learning-based solution for handling large-scale data in a CBDC environment, aiming to improve throughput and reduce network congestion. The proposed technique can increase throughput by more than 64 times and reduce network congestion by over 20% compared to existing systems.
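
The abstract does not detail the reinforcement learning formulation, so the sketch below is purely illustrative: a tabular Q-learning agent that chooses a transaction batch size given a coarse congestion level and is rewarded for throughput minus a congestion penalty. The toy environment, states, actions, and reward are assumptions for exposition, not the paper's method.

```python
# Purely illustrative Q-learning sketch: an agent picks a transaction batch size
# given a coarse congestion level, rewarded for throughput minus a congestion penalty.
# The environment model and reward are hypothetical, not the paper's technique.
import random

CONGESTION_LEVELS = ["low", "medium", "high"]
BATCH_SIZES = [100, 500, 1000]          # actions: transactions per batch
Q = {(s, a): 0.0 for s in CONGESTION_LEVELS for a in BATCH_SIZES}
alpha, gamma, epsilon = 0.1, 0.9, 0.1

def step(state: str, batch: int):
    """Toy environment: bigger batches raise throughput but also congestion risk."""
    throughput = batch * {"low": 1.0, "medium": 0.7, "high": 0.4}[state]
    congestion_penalty = 0.002 * batch ** 1.2
    reward = throughput - congestion_penalty
    weights = [3, 2, 1] if batch <= 500 else [1, 2, 3]
    next_state = random.choices(CONGESTION_LEVELS, weights=weights)[0]
    return reward, next_state

state = "low"
for _ in range(5000):
    action = (random.choice(BATCH_SIZES) if random.random() < epsilon
              else max(BATCH_SIZES, key=lambda a: Q[(state, a)]))
    reward, nxt = step(state, action)
    best_next = max(Q[(nxt, a)] for a in BATCH_SIZES)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
    state = nxt

# Learned batch-size policy per congestion level:
print({s: max(BATCH_SIZES, key=lambda a: Q[(s, a)]) for s in CONGESTION_LEVELS})
```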

Feasibility of Emotional Freedom Techniques in Patients with Posttraumatic Stress Disorder: a pilot study

  • Yujin Choi;Yunna Kim;Do-Hyung Kwon;Sunyoung Choi;Young-Eun Choi;Eun Kyoung Ahn;Seung-Hun Cho;Hyungjun Kim
    • Journal of Pharmacopuncture
    • /
    • v.27 no.1
    • /
    • pp.27-37
    • /
    • 2024
  • Objectives: Posttraumatic stress disorder (PTSD) is a prevalent mental health condition, and techniques that use sensory stimulation in processing traumatic memories have gained attention. Emotional Freedom Techniques (EFT) is a psychotherapy that combines tapping on acupoints with exposure and cognitive reframing. This pilot study aimed to assess the feasibility of EFT as a treatment for PTSD by answering the following research questions: 1) What is the compliance and completion rate of patients with PTSD with regard to the EFT protocol? Is the dropout rate reasonable? 2) Is the effect size of the EFT protocol for PTSD sufficient to justify a future trial? Methods: Thirty participants diagnosed with PTSD were recruited. They received weekly EFT sessions for five weeks, in which they repeated a statement acknowledging the problem and accepting themselves while tapping the SI3 acupoint on the side of their hand. PTSD symptoms were evaluated using the PTSD Checklist for DSM-5 (PCL-5) before and after the intervention. Results: Of the 30 PTSD patients (mean age: 34.1 ± 9.1, 80% female), 96.7% showed over 80% compliance with the EFT sessions, and 86.7% completed the entire study process. The mean PCL-5 total score decreased significantly after the intervention, with a large effect size (change from baseline: -14.33 [95% CI: -19.79, -8.86], p < 0.0001, d = 1.06). Conclusion: The study suggests that EFT is a feasible treatment for PTSD, with high session compliance and low dropout rates. The effect size observed in this study supports the need for a larger trial in the future to further investigate EFT as a treatment for PTSD. However, the lack of a control group and the use of a self-rated questionnaire for PTSD symptoms are limitations of this study. The findings of this pilot study can be used to plan a future trial.
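
For reference, the reported effect size follows the conventional paired Cohen's d on change scores: the mean pre-post change divided by the standard deviation of the change. The sketch below illustrates that computation with hypothetical PCL-5 scores; the numbers are not the study's patient-level data.

```python
# Minimal sketch of the paired effect size for change from baseline:
# d = |mean(change)| / sd(change). The scores below are hypothetical,
# not the study's patient-level PCL-5 data.
import numpy as np

pre  = np.array([52, 48, 61, 45, 57, 50, 63, 49])   # hypothetical PCL-5 before EFT
post = np.array([38, 35, 44, 40, 41, 37, 47, 36])   # hypothetical PCL-5 after EFT

change = post - pre
d = abs(change.mean()) / change.std(ddof=1)
print(f"mean change = {change.mean():.2f}, Cohen's d = {d:.2f}")
```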

Simulation analysis and evaluation of decontamination effect of different abrasive jet process parameters on radioactively contaminated metal

  • Lin Zhong;Jian Deng;Zhe-wen Zuo;Can-yu Huang;Bo Chen;Lin Lei;Ze-yong Lei;Jie-heng Lei;Mu Zhao;Yun-fei Hua
    • Nuclear Engineering and Technology
    • /
    • v.55 no.11
    • /
    • pp.3940-3955
    • /
    • 2023
  • A new method for numerically predicting and evaluating the decontamination effect of abrasive jet decontamination on radioactively contaminated metal is proposed. Based on a coupled Computational Fluid Dynamics and Discrete Element Model (CFD-DEM) simulation, the motion patterns and distribution of abrasives can be predicted, and the decontamination effect can be evaluated by image processing and recognition technology. The impact of three key parameters (impact distance, inlet pressure, and abrasive mass flow rate) on the decontamination effect is revealed. Moreover, reliability-verification experiments on the decontamination effect and the numerical simulation methods were conducted. The results show that 60Co and other homogeneous solid-solution radioactive pollutants can be removed by the abrasive jet, with an average Co removal rate exceeding 80%. The proposed numerical simulation and evaluation method is reliable because of the good fit between predicted and actual values: the predicted and actual abrasive distribution diameters are Ф57 and Ф55, the total coverage rates are 26.42% and 23.50%, and the average impact velocities are 81.73 m/s and 78.00 m/s. Further analysis shows that the impact distance significantly affects the distribution of abrasive particles on the target surface: the coverage rate of the core area first increases and then decreases as the nozzle impact distance increases, reaching a maximum of 14.44% at 300 mm. It is recommended to set the impact distance at around 300 mm, because at this distance the core-area coverage of the abrasive is largest and the impact velocity is stable at its highest value of 81.94 m/s. The nozzle inlet pressure mainly affects the impact kinetic energy of the abrasive and has little effect on its distribution. The greater the inlet pressure, the greater the impact kinetic energy and the stronger the decontamination ability of the abrasive, but the energy consumption is also higher. For the decontamination of radioactively contaminated metals, an inlet pressure of around 0.6 MPa is recommended, because most of the Co can be removed at this pressure. Appropriately increasing the mass and flow of abrasives can enhance decontamination effectiveness; a total abrasive mass of 50 g per unit decontamination area is suggested, because the core-area coverage rate is relatively large and the nozzle wear is acceptable under this condition.
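
The evaluation step relies on image processing to estimate how much of the target surface the abrasive covers. A minimal sketch of such a coverage-rate computation, using a simple intensity threshold and a centered core region, is given below; the threshold, core-region definition, and synthetic image are assumptions, not the authors' recognition pipeline.

```python
# Minimal coverage-rate sketch: threshold a grayscale impact image and compute the
# fraction of "covered" pixels overall and within a central core region.
# Threshold and core-region definition are illustrative, not the paper's pipeline.
import numpy as np

def coverage_rate(gray: np.ndarray, threshold: int = 128) -> float:
    """Fraction of pixels whose intensity exceeds the threshold."""
    return float((gray > threshold).mean())

def core_coverage_rate(gray: np.ndarray, core_fraction: float = 0.3, threshold: int = 128) -> float:
    """Coverage rate inside a centered square spanning `core_fraction` of each side."""
    h, w = gray.shape
    ch, cw = int(h * core_fraction), int(w * core_fraction)
    top, left = (h - ch) // 2, (w - cw) // 2
    return coverage_rate(gray[top:top + ch, left:left + cw], threshold)

# Synthetic example: random impact marks on a 512x512 image.
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(512, 512), dtype=np.uint8)
print(f"total coverage = {coverage_rate(image):.2%}, core coverage = {core_coverage_rate(image):.2%}")
```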

Digital Library Interface Research Based on EEG, Eye-Tracking, and Artificial Intelligence Technologies: Focusing on the Utilization of Implicit Relevance Feedback (뇌파, 시선추적 및 인공지능 기술에 기반한 디지털 도서관 인터페이스 연구: 암묵적 적합성 피드백 활용을 중심으로)

  • Hyun-Hee Kim;Yong-Ho Kim
    • Journal of the Korean Society for Information Management
    • /
    • v.41 no.1
    • /
    • pp.261-282
    • /
    • 2024
  • This study proposed and evaluated electroencephalography (EEG)-based and eye-tracking-based methods to determine relevance by utilizing users' implicit relevance feedback while navigating content in a digital library. For this, EEG/eye-tracking experiments were conducted on 32 participants using video, image, and text data. To assess the usefulness of the proposed methods, deep learning-based artificial intelligence (AI) techniques were used as a competitive benchmark. The evaluation results showed that EEG component-based methods (av_P600 and f_P3b components) demonstrated high classification accuracy in selecting relevant videos and images (faces/emotions). In contrast, AI-based methods, specifically object recognition and natural language processing, showed high classification accuracy for selecting images (objects) and texts (newspaper articles). Finally, guidelines for implementing a digital library interface based on EEG, eye-tracking, and artificial intelligence technologies have been proposed. Specifically, a system model based on implicit relevance feedback has been presented. Moreover, to enhance classification accuracy, methods suitable for each media type have been suggested, including EEG-based, eye-tracking-based, and AI-based approaches.
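
As a rough illustration of the EEG-component route described above, one common approach is to average each trial's amplitude over a late positive window (where P3b/P600-like components appear) and feed that feature to a simple classifier that separates relevant from non-relevant items. The window, sampling rate, classifier choice, and synthetic data below are assumptions, not the study's pipeline.

```python
# Illustrative sketch: extract a mean ERP amplitude in a late positive window
# (P3b/P600-like) per trial and classify relevant vs. non-relevant items.
# The window, sampling rate, and classifier are assumptions, not the study's setup.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

FS = 250                      # sampling rate (Hz), assumed
WINDOW = (0.45, 0.65)         # post-stimulus window in seconds, assumed

def late_positive_amplitude(epochs: np.ndarray) -> np.ndarray:
    """Mean amplitude per trial within the late positive window.
    epochs: array of shape (n_trials, n_samples), baseline-corrected."""
    start, stop = int(WINDOW[0] * FS), int(WINDOW[1] * FS)
    return epochs[:, start:stop].mean(axis=1, keepdims=True)

# Synthetic epochs: "relevant" trials get a slightly larger late positivity.
rng = np.random.default_rng(1)
n, n_samples = 200, int(1.0 * FS)
labels = rng.integers(0, 2, n)                      # 1 = relevant, 0 = not relevant
epochs = rng.normal(0, 1.0, (n, n_samples))
epochs[labels == 1, int(0.45 * FS):int(0.65 * FS)] += 0.8

features = late_positive_amplitude(epochs)
scores = cross_val_score(LogisticRegression(), features, labels, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```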

Study on Customer Satisfaction Performance Evaluation through e-SCM-based OMS Implementation (e-SCM 기반 OMS 구현을 통한 고객 만족 성과평가에 관한 연구)

  • Hyungdo Zun;ChiGon Kim;KyungBae Yoon
    • The Journal of the Convergence on Culture Technology
    • /
    • v.10 no.3
    • /
    • pp.891-899
    • /
    • 2024
  • The Fourth Industrial Revolution is centered on a personalized demand-fulfillment economy and on the transformation and flexible processing needed to deliver what customers want in real time across space and time. This paper implements the construction and operation of a packaging platform that can instantly procure the required packaging products based on real-time orders and evaluates its performance. The components of customer satisfaction are flexible and situation-dependent, which requires efficient management of enterprise operational processes based on an e-SCM platform. An OMS optimized for these conditions plays an important role in maximizing and differentiating the efficiency of a company's operations and improving its cost advantage. The OMS is a mass-customization system that provides efficient MOT (Moment of Truth) logistics services to address the eco-friendly concerns of many individual customers and to achieve optimized logistics operation goals, thereby enhancing repurchase intention and business sustainability. The OMS precisely analyzes the collected data to support information and decision-making related to efficiency, productivity, and cost, and provides accurate reports. It uses data visualization tools to present data visually and suggests directions for improving the operational process through statistical and predictive analysis.