• Title/Summary/Keyword: Model output


Comparison of Integrated Health and Welfare Service Provision Projects Centered on Medical Institutions (의료기관 중심 보건의료·복지 통합 서비스 제공 사업 비교)

  • Su-Jin Lee;Jong-Yeon Kim
    • Journal of agricultural medicine and community health / v.49 no.2 / pp.132-145 / 2024
  • Objectives: This study compares the cases of the Dalgubeol Health Care Project, the 301 Network Project, and the 3 for 1 Project on the basis of program logic models, in order to derive measures for promoting integrated health and welfare services centered on medical institutions. Methods: From January to December 2021, information on the implementation systems and performance of each institution was collected. Data sources included prior academic research, project reports, operational guidelines, official press releases, media articles, and written surveys from project managers. A program logic model analysis framework was applied, structuring the information according to four elements: situation, input, activity, and output. Results: All three projects aimed to address the fragmentation of health and welfare services and medical blind spots. Despite similar multidisciplinary team compositions, differences existed in the specific fields, recruitment scale, and employment types. Variations in funding sources led to differences in community collaboration, support methods, and future directions. There were discrepancies in the number of beneficiaries and medical treatments, and different results were observed when comparing the actual number of people served with the input manpower and the project cost per beneficiary. Conclusions: To design an integrated health and welfare service provision system centered on medical institutions, securing a stable funding mechanism and establishing an appropriate target population and service delivery system are crucial. In addition, installing a dedicated department within the medical institution to link activities across various sectors, rather than outsourcing, is necessary, and appropriate recruitment and stable employment systems must be ensured. A comprehensive provision system offering services from mild to severe cases through public-private cooperation is suggested.

Enhancing Empathic Reasoning of Large Language Models Based on Psychotherapy Models for AI-assisted Social Support (인공지능 기반 사회적 지지를 위한 대형언어모형의 공감적 추론 향상: 심리치료 모형을 중심으로)

  • Yoon Kyung Lee;Inju Lee;Minjung Shin;Seoyeon Bae;Sowon Hahn
    • Korean Journal of Cognitive Science / v.35 no.1 / pp.23-48 / 2024
  • Building human-aligned artificial intelligence (AI) for social support remains challenging despite the advancement of Large Language Models (LLMs). We present a novel method, Chain of Empathy (CoE) prompting, that uses insights from psychotherapy to induce LLMs to reason about human emotional states. The method is inspired by several psychotherapy approaches: Cognitive-Behavioral Therapy (CBT), Dialectical Behavior Therapy (DBT), Person-Centered Therapy (PCT), and Reality Therapy (RT), each of which leads to a different pattern of interpreting clients' mental states. LLMs without CoE reasoning generated predominantly exploratory responses. When LLMs used CoE reasoning, however, we found a more comprehensive range of empathic responses aligned with each psychotherapy model's distinct reasoning pattern. For empathic expression classification, the CBT-based CoE yielded the most balanced classification of empathic expression labels and generation of empathic response text. Regarding emotion reasoning, however, other approaches such as DBT and PCT showed higher performance in emotion reaction classification. We further conducted qualitative analysis and alignment scoring of each prompt-generated output. The findings underscore the importance of understanding the emotional context and how it affects human-AI communication. Our research contributes to understanding how psychotherapy models can be incorporated into LLMs, facilitating the development of context-aware, safe, and empathically responsive AI.
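
The sketch below illustrates the core idea of CoE prompting in a hedged form: before generating a supportive reply, the model is asked to reason about the client's mental state in the style of one psychotherapy model. The template wording, function name, and example message are illustrative assumptions, not the authors' exact prompts.

```python
# A minimal sketch of Chain of Empathy (CoE) prompting: one reasoning step,
# phrased in the style of a chosen psychotherapy model, is inserted before the
# instruction to respond. Template wording is an assumption, not the paper's prompt.
COE_TEMPLATES = {
    "CBT": "First identify the client's automatic thoughts and cognitive distortions, "
           "then describe the emotion these thoughts produce.",
    "DBT": "First validate the client's emotion as understandable, then note what need "
           "the emotion signals.",
    "PCT": "First reflect the client's feelings and frame of reference without judgment.",
    "RT":  "First identify what the client wants and what they are currently doing to get it.",
}

def build_coe_prompt(client_message: str, therapy: str = "CBT") -> str:
    """Assemble a prompt with one therapy-specific empathic-reasoning step."""
    reasoning_step = COE_TEMPLATES[therapy]
    return (
        "You are an empathic support agent.\n"
        f"Client: {client_message}\n"
        f"Reasoning step ({therapy}): {reasoning_step}\n"
        "After this reasoning, write a short empathic response."
    )

print(build_coe_prompt("I failed my exam again and I feel worthless.", therapy="CBT"))
```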

Nondestructive Quantification of Corrosion in Cu Interconnects Using Smith Charts (스미스 차트를 이용한 구리 인터커텍트의 비파괴적 부식도 평가)

  • Minkyu Kang;Namgyeong Kim;Hyunwoo Nam;Tae Yeob Kang
    • Journal of the Microelectronics and Packaging Society / v.31 no.2 / pp.28-35 / 2024
  • Corrosion inside electronic packages significantly impacts system performance and reliability, necessitating non-destructive diagnostic techniques for system health management. This study presents a non-destructive method for assessing corrosion in copper interconnects using the Smith chart, a tool that integrates the magnitude and phase of complex impedance for visualization. For the experiment, specimens simulating copper transmission lines were subjected to temperature and humidity cycles according to the MIL-STD-810G standard to induce corrosion. The corrosion level of each specimen was quantitatively assessed and labeled based on color changes in the R channel. S-parameters and Smith charts at progressing corrosion stages showed unique patterns corresponding to five levels of corrosion, confirming the effectiveness of the Smith chart as a tool for corrosion assessment. Furthermore, by employing data augmentation, 4,444 Smith charts representing various corrosion levels were obtained, and artificial intelligence models were trained to output the corrosion stage of copper interconnects from an input Smith chart. Among the CNN and Transformer models specialized for image classification, the ConvNeXt model achieved the highest diagnostic performance, with an accuracy of 89.4%. Diagnosing corrosion with the Smith chart thus allows a non-destructive evaluation based on electrical signals, and because it integrates and visualizes signal magnitude and phase information, it is expected to provide an intuitive and noise-robust diagnosis.
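
As a hedged illustration of the quantity the Smith chart visualizes, the short sketch below computes the complex reflection coefficient Gamma = (Z - Z0)/(Z + Z0), whose magnitude and phase together locate one point on the chart; the impedance values and the 50-ohm reference are assumptions for illustration, not measurements from the study.

```python
# A minimal sketch of the mapping a Smith chart visualises: the complex
# reflection coefficient combines magnitude and phase of the line impedance
# in a single point. Impedance values below are hypothetical, not measured data.
import numpy as np

Z0 = 50.0  # assumed reference (characteristic) impedance in ohms

def reflection_coefficient(z, z0=Z0):
    return (z - z0) / (z + z0)

# As corrosion progresses, the interconnect's resistance and reactance change,
# so the trace of Gamma over corrosion stages shifts on the Smith chart.
impedances = np.array([50 + 0j, 55 + 5j, 62 + 12j, 75 + 25j, 95 + 40j])  # hypothetical stages
gammas = reflection_coefficient(impedances)
for stage, g in enumerate(gammas):
    print(f"stage {stage}: |Gamma| = {abs(g):.3f}, phase = {np.degrees(np.angle(g)):.1f} deg")
```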

Analysis of the Efficiency of Entrepreneurship Support in Korean Universities (국내 대학의 창업지원 효율성 분석)

  • Heung-Hee Kim;Dae-Geun Kim
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship / v.19 no.4 / pp.87-101 / 2024
  • This study aims to provide insights for the efficient utilization of resources by analyzing the entrepreneurship support efficiency of Korean universities. To identify the factors influencing the number of entrepreneurs, which is the primary goal of university entrepreneurship support, a multiple regression analysis was conducted, identifying five effective independent variables. Using these five independent variables as input variables and the number of entrepreneurs as the output variable, the DEA method was used to analyze the efficiency of entrepreneurship support for each university as of 2023. The analysis of 150 four-year universities in Korea showed that nine universities exhibited complete efficiency in both the CCR and BCC models. Among the remaining 141 universities that showed inefficiency, the cause was scale for five universities, technology for two universities, and both scale and technology for 134 universities. Regarding returns to scale, nine universities exhibited CRS, 79 exhibited IRS, and 62 exhibited DRS. Additionally, reference groups that could serve as benchmarks for improving the efficiency of inefficient universities were identified, and target values (projections) for each variable to achieve efficiency were also presented. Despite the limitations of the DEA model, this study helps each university identify the causes of inefficiency in its entrepreneurship support and derive specific improvements to enhance efficiency. This facilitates more efficient resource management and can positively impact the ultimate goals of university entrepreneurship support, such as regional economic development and job creation.
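
For readers unfamiliar with the DEA step, the following is a minimal sketch of an input-oriented CCR efficiency calculation solved as a linear program with SciPy. The tiny made-up data set (two inputs, one output, four universities) and all names are assumptions for illustration only, not the study's five-input, 150-university data.

```python
# A minimal sketch of input-oriented CCR DEA: for each decision-making unit (DMU),
# minimise theta subject to a convex combination of all DMUs using at most
# theta times its inputs while producing at least its outputs.
import numpy as np
from scipy.optimize import linprog

X = np.array([[4.0, 3.0], [6.0, 2.0], [8.0, 5.0], [5.0, 6.0]])  # inputs per DMU (made up)
Y = np.array([[10.0], [12.0], [11.0], [9.0]])                   # outputs per DMU (made up)

def ccr_efficiency(o: int) -> float:
    """min theta s.t. sum_j lam_j * x_j <= theta * x_o and sum_j lam_j * y_j >= y_o."""
    n, m = X.shape           # n DMUs, m inputs
    s = Y.shape[1]           # s outputs
    c = np.r_[1.0, np.zeros(n)]                    # decision vector: [theta, lam_1..lam_n]
    A_in = np.c_[-X[o].reshape(m, 1), X.T]         # input constraints
    A_out = np.c_[np.zeros((s, 1)), -Y.T]          # output constraints (flipped to <=)
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[o]],
                  bounds=[(0, None)] * (n + 1))
    return res.fun

for dmu in range(len(X)):
    print(f"DMU {dmu}: CCR efficiency = {ccr_efficiency(dmu):.3f}")
```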


Simulink-based xPC Target Monitoring/Logging Tool Development (시뮬링크 기반의 실시간 모니터링 및 로깅 도구 개발)

  • Yoonbin Hong;Minji Park;Donghyeok An
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.17 no.5 / pp.339-350 / 2024
  • On construction sites, the engines of heavy machinery are tested by practitioners who manually adjust engine settings and directly measure the output. This process has consistently raised concerns about time costs and the risk of incidents. To address these issues, simulations of heavy equipment are conducted using Speedgoat hardware and the Simulink API. However, because different versions of Speedgoat hardware and the Simulink API vary in compatibility, engineers need a comprehensive understanding of various Simulink APIs. It is practically challenging for engineers, who must already have a deep understanding of heavy equipment structures, to also possess programming skills, including API usage. Thus, this paper proposes a tool that accepts configuration values for heavy equipment simulation and visually outputs and logs the simulation results. The proposed tool delivers configuration values, such as engine settings, to the simulator model and monitors and logs the resulting simulation outputs. These functionalities were validated through test scenarios. By using the developed tool, engineers are expected to reduce the burden of learning the Simulink API and focus more on understanding the structure of heavy equipment. The tool is also expected to provide a more efficient and safer working environment for heavy equipment testing on construction sites.
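
The following is a minimal, hypothetical sketch (written in Python; it is not the Simulink or Speedgoat API) of the workflow the abstract describes: pushing configuration values to a simulator model, then periodically reading back and logging the simulated outputs. Every name and value here is a stand-in.

```python
# A hedged sketch of a config -> simulate -> monitor/log loop. The dictionary
# stands in for a running simulator model; no real Simulink calls are shown.
import csv
import time

def push_config(sim: dict, config: dict) -> None:
    """Stand-in for delivering parameter values (e.g., engine settings) to the model."""
    for name, value in config.items():
        sim[name] = value

def monitor_and_log(sim: dict, signals, path="sim_log.csv", steps=10, period=0.1) -> None:
    """Sample the listed signals at a fixed period and write them to a CSV log."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["t", *signals])
        for step in range(steps):
            row = [step * period] + [sim.get(s, 0.0) for s in signals]
            writer.writerow(row)
            time.sleep(period)

simulator_state = {}  # hypothetical stand-in for the simulator model
push_config(simulator_state, {"engine_rpm": 1800, "fuel_rate": 42.0})
monitor_and_log(simulator_state, ["engine_rpm", "fuel_rate"], steps=5)
```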

A Conceptual Review of the Transaction Costs within a Distribution Channel (유통경로내의 거래비용에 대한 개념적 고찰)

  • Kwon, Young-Sik;Mun, Jang-Sil
    • Journal of Distribution Science / v.10 no.2 / pp.29-41 / 2012
  • This paper undertakes a conceptual review of transaction cost to broaden the understanding of the transaction cost analysis (TCA) approach. More than 40 years have passed since Coase's fundamental insight that transaction, coordination, and contracting costs must be considered explicitly in explaining the extent of vertical integration. Coase (1937) forced economists to identify previously neglected constraints on the trading process to foster efficient intrafirm, rather than interfirm, transactions. The transaction cost approach to the study of economic organization regards transactions as the basic units of analysis and holds that understanding transaction cost economy is central to organizational study. The approach applies to determining efficient boundaries, as between firms and markets, and to internal transaction organization, including the design of employment relations. TCA, developed principally by Oliver Williamson (1975, 1979, 1981a), blends institutional economics, organizational theory, and contract law. Further progress in transaction cost research awaits the identification of the critical dimensions in which transaction costs differ and an examination of the economizing properties of alternative institutional modes for organizing transactions. The crucial investment distinction is: to what degree are transaction-specific (non-marketable) expenses incurred? Unspecialized items pose few hazards, since buyers can turn to alternative sources and suppliers can sell output intended for one order to other buyers. Non-marketability problems arise when the identities of specific parties have important cost-bearing consequences; transactions of this kind are labeled idiosyncratic. The summarized results of the review are as follows. First, firms' distribution decisions often prompt examination of the make-or-buy question: should a marketing activity be performed within the organization by company employees or contracted to an external agent? Second, manufacturers introducing an industrial product to a foreign market face a difficult decision: should the product be marketed primarily by captive agents (the company sales force and distribution division) or by independent intermediaries (outside sales agents and distribution)? Third, the authors develop a theoretical extension to the basic transaction cost model by combining insights from various theories with the TCA approach. Fourth, other such extensions are likely required for the general model to be applied to different channel situations; it is naive to assume the basic model applies across markedly different channel contexts without modifications and extensions. Although this study contributes to scholarly research, it is limited by several factors. First, the theoretical perspective of TCA has attracted considerable recent interest in the area of marketing channels, and the analysis aims to match the properties of efficient governance structures with the attributes of the transaction. Second, empirical evidence on TCA's basic propositions is sketchy. Apart from Anderson's (1985) study of the vertical integration of the selling function and John's (1984) study of opportunism by franchised dealers, virtually no marketing studies involving the constructs implicated in the analysis have been reported. We hope, therefore, that further research will clarify distinctions between the different aspects of specific assets. Another important line of future research is the integration of efficiency-oriented TCA with organizational approaches that emphasize the conceptual definition of specific assets and industry structure. Finally, research on transaction costs, uncertainty, opportunism, and switching costs is critical to future study.


Sentiment Analysis of Movie Reviews Using an Integrated CNN-LSTM Model (CNN-LSTM 조합모델을 이용한 영화리뷰 감성분석)

  • Park, Ho-yeon;Kim, Kyoung-jae
    • Journal of Intelligence and Information Systems / v.25 no.4 / pp.141-154 / 2019
  • Internet technology and social media are growing rapidly, and data mining technology has evolved to enable unstructured document representation in a variety of applications. Sentiment analysis is an important technology for distinguishing low-quality from high-quality content through the text data of products, and it has proliferated within text mining. Sentiment analysis mainly analyzes people's opinions in text data by assigning predefined categories such as positive and negative, and its accuracy has been studied in various directions, from simple rule-based approaches to dictionary-based approaches using predefined labels; it is one of the most active research areas in natural language processing and text mining. Real online reviews are openly and easily collected and directly affect business: in marketing, real-world information from customers is gathered on websites rather than through surveys, and depending on whether a website's posts are positive or negative, the customer response is reflected in sales, so firms try to identify this information. However, many reviews on a website are not clearly positive or negative and are difficult to classify. Earlier studies in this research area used review data from the Amazon.com shopping mall, whereas recent studies use data on stock market trends, blogs, news articles, weather forecasts, IMDB, Facebook, and so on. However, accuracy remains limited because sentiment calculations change with the subject, paragraph, sentiment lexicon polarity, and sentence strength. This study aims to classify the polarity of sentiment into positive and negative categories and to increase the prediction accuracy of the polarity analysis using the IMDB review data set. First, for text classification related to sentiment analysis, popular machine learning algorithms such as NB (naive Bayes), SVM (support vector machines), XGBoost, RF (random forests), and gradient boosting are adopted as comparative models. Second, deep learning has demonstrated the ability to extract complex, discriminative features from data; representative algorithms are CNNs (convolutional neural networks), RNNs (recurrent neural networks), and LSTM (long short-term memory). A CNN can be used similarly to a bag-of-words model when processing a sentence in vector format, but it does not consider the sequential attributes of the data. An RNN handles order well because it takes the time information of the data into account, but it suffers from long-term dependency problems, and LSTM is used to solve the problem of long-term dependence. For comparison, CNN and LSTM were chosen as simple deep learning models, and in addition to classical machine learning algorithms, CNN, LSTM, and the integrated model were analyzed. Although the algorithms have many parameters, we examined the relationship between parameter values and accuracy to find the optimal combination, and we also tried to understand how and why the models work well for sentiment analysis. This study proposes an integrated CNN-LSTM algorithm to extract the positive and negative features in text analysis. The reasons for combining these two algorithms are as follows. A CNN can extract classification features automatically by applying convolution layers and is amenable to massively parallel processing, whereas an LSTM is not capable of highly parallel processing. Like valves, the LSTM has input, output, and forget gates that can be opened and closed at the desired time; these gates have the advantage of placing memory blocks on hidden nodes. The memory block of the LSTM may not store all the data, but it can compensate for the CNN's inability to capture long-term sequential dependencies. Furthermore, when the LSTM is applied to the output of the CNN's pooling layer, the model has an end-to-end structure, so spatial and temporal features can be modeled simultaneously. The combined CNN-LSTM achieved 90.33% accuracy; it is slower than the CNN alone but faster than the LSTM alone, and the presented model was more accurate than the other models. In addition, the word embedding layer can be improved as the kernels are trained step by step. CNN-LSTM can compensate for the weaknesses of each individual model, and the end-to-end structure offers the advantage of improving learning layer by layer. For these reasons, this study seeks to enhance the classification accuracy of movie reviews using the integrated CNN-LSTM model.
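
A minimal sketch of an integrated CNN-LSTM sentiment classifier on the Keras IMDB review data set is shown below; the layer sizes, vocabulary size, and training settings are illustrative assumptions and do not reproduce the authors' exact architecture.

```python
# A hedged sketch of a CNN-LSTM polarity classifier: the Conv1D/pooling stage
# extracts local n-gram features, the LSTM models their order, and a sigmoid
# output separates positive from negative reviews.
from tensorflow.keras import layers, models
from tensorflow.keras.datasets import imdb
from tensorflow.keras.preprocessing.sequence import pad_sequences

VOCAB_SIZE, MAX_LEN = 10_000, 200  # assumed hyperparameters

(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=VOCAB_SIZE)
x_train = pad_sequences(x_train, maxlen=MAX_LEN)
x_test = pad_sequences(x_test, maxlen=MAX_LEN)

model = models.Sequential([
    layers.Embedding(VOCAB_SIZE, 128),          # word embedding layer
    layers.Conv1D(64, 5, activation="relu"),    # CNN extracts local features
    layers.MaxPooling1D(pool_size=4),           # pooling before the sequential stage
    layers.LSTM(64),                            # LSTM captures order information
    layers.Dense(1, activation="sigmoid"),      # positive vs. negative polarity
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=2, batch_size=128, validation_data=(x_test, y_test))
```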

A Coupled-ART Neural Network Capable of Modularized Categorization of Patterns (복합 특징의 분리 처리를 위한 모듈화된 Coupled-ART 신경회로망)

  • 우용태;이남일;안광선
    • The Journal of Korean Institute of Communications and Information Sciences / v.19 no.10 / pp.2028-2042 / 1994
  • Properly defining signal and noise in a self-organizing system such as the ART (Adaptive Resonance Theory) neural network model raises a number of subtle issues. Pattern context must enter the definition, so that input features treated as irrelevant noise when embedded in a given input pattern may be treated as informative signals when embedded in a different input pattern. ART automatically self-scales its computational units to embody context- and learning-dependent definitions of signal and noise, and there is no problem in categorizing input patterns whose features are similar in nature. However, when input patterns have features that differ in size and nature, a single vigilance parameter is not enough to differentiate signal from noise for good categorization. For example, if the value of the vigilance parameter is large, noise may be processed as an informative signal and unnecessary categories are generated; if the value of the vigilance parameter is small, an informative signal may be ignored and treated as noise. Hence it is not easy to achieve good pattern categorization. To overcome these problems, a Coupled-ART neural network capable of modularized categorization of patterns is proposed. The Coupled-ART has two layers of tightly coupled modules, the upper and the lower. The lower layer processes the global features and the structural features of a pattern separately, in parallel. The upper layer combines the categorized outputs from the lower layer and categorizes the combined output. Hence, owing to its modularized categorization of patterns, the Coupled-ART classifies patterns more efficiently than the ART1 model.
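
As a hedged illustration of the vigilance test the abstract discusses, the sketch below applies the standard ART1 match criterion |I AND w| / |I| >= rho to binary patterns; it omits the bottom-up choice ordering of ART1, and the toy prototypes and thresholds are assumptions, not the paper's data.

```python
# A minimal sketch of the ART1 vigilance (match) test: a pattern resonates with
# a stored prototype only if their overlap, relative to the pattern, reaches rho.
import numpy as np

def art1_categorize(pattern, prototypes, rho=0.7):
    """Return the index of the first resonating category, or -1 if none passes
    the vigilance test |pattern AND prototype| / |pattern| >= rho."""
    pattern = np.asarray(pattern, dtype=bool)
    for j, w in enumerate(prototypes):
        overlap = np.logical_and(pattern, np.asarray(w, dtype=bool)).sum()
        if overlap / max(pattern.sum(), 1) >= rho:   # vigilance criterion
            return j
    return -1  # no resonance: a new category would be created

# Toy example: with high vigilance, small feature differences create new
# categories; with low vigilance, they are absorbed into an existing one.
prototypes = [[1, 1, 0, 0, 1], [0, 0, 1, 1, 1]]
print(art1_categorize([1, 1, 1, 0, 0], prototypes, rho=0.9))  # -1: too strict, new category
print(art1_categorize([1, 1, 1, 0, 0], prototypes, rho=0.5))  # 0: matches the first prototype
```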


Speed-up Techniques for High-Resolution Grid Data Processing in the Early Warning System for Agrometeorological Disaster (농업기상재해 조기경보시스템에서의 고해상도 격자형 자료의 처리 속도 향상 기법)

  • Park, J.H.;Shin, Y.S.;Kim, S.K.;Kang, W.S.;Han, Y.K.;Kim, J.H.;Kim, D.J.;Kim, S.O.;Shim, K.M.;Park, E.W.
    • Korean Journal of Agricultural and Forest Meteorology / v.19 no.3 / pp.153-163 / 2017
  • The objective of this study is to enhance the speed of the models estimating weather variables (e.g., minimum/maximum temperature, sunshine hours, and PRISM (Parameter-elevation Regression on Independent Slopes Model)-based precipitation) that are applied to the Agrometeorological Early Warning System (http://www.agmet.kr). The current weather estimation process runs on high-performance multi-core CPUs with 8 physical cores and 16 logical threads. Nonetheless, the server is not dedicated to handling just a single county, so very high overhead is involved in calculating the 10 counties of the Seomjin River Basin. In order to reduce this overhead, several caching and parallelization techniques were used to measure performance and check applicability. The results are as follows: (1) for simple calculations such as Growing Degree Days accumulation, the time required for input and output (I/O) is significantly greater than that for computation, suggesting the need for a technique that reduces disk I/O bottlenecks; (2) when there are many I/O operations, it is advantageous to distribute them across several servers, but each server must have a cache for the input data so that the servers do not compete for the same resource; and (3) a GPU-based parallel processing method is most suitable for models with large computation loads, such as PRISM.
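
A minimal sketch of the first finding, under assumed grid shapes and data handling: Growing Degree Days accumulation is cheap vectorized arithmetic, so caching the daily input grids matters far more than optimizing the computation. The names, the grid shape, and the random stand-in for the real raster input are assumptions.

```python
# A hedged sketch: Growing Degree Days (GDD) accumulation over a high-resolution
# grid, with an in-memory cache so each day's grid is loaded from "disk" once.
import numpy as np
from functools import lru_cache

GRID_SHAPE = (1200, 1500)   # illustrative high-resolution grid for one county
BASE_TEMP = 10.0            # assumed base temperature for GDD, degrees Celsius

@lru_cache(maxsize=None)
def load_daily_mean_temp(day: int) -> np.ndarray:
    """Stand-in for the expensive disk read that dominates runtime; the cached
    result is reused on repeated access. A random grid replaces the real raster."""
    rng = np.random.default_rng(day)
    return rng.uniform(-5.0, 30.0, GRID_SHAPE).astype(np.float32)

def accumulate_gdd(days: range) -> np.ndarray:
    gdd = np.zeros(GRID_SHAPE, dtype=np.float32)
    for day in days:
        tmean = load_daily_mean_temp(day)
        gdd += np.clip(tmean - BASE_TEMP, 0.0, None)  # cheap element-wise compute
    return gdd

gdd_grid = accumulate_gdd(range(1, 181))  # e.g., the first half of a growing season
print(gdd_grid.mean())
```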

Simulation of Pension Finance and Its Economic Effects (연금재정(年金財政) 시뮬레이션과 경제적(經濟的) 파급효과(波及效果))

  • Min, Jae-sung;Kim, Yong-ha
    • KDI Journal of Economic Policy / v.13 no.1 / pp.115-134 / 1991
  • The role of pension plans in the macroeconomy has been a subject of much interest for some years. It has come to be recognized that pension plans may alter basic macroeconomic behavior patterns, so the net effects on both savings and labor supply are matters for speculation. The aim of the present paper is to provide quantitative results which may be helpful in attaching orders of magnitude to some of the possible effects. We are not concerned with providing empirical evidence relating to actual behavior, but rather with deriving the macroeconomic implications of alternative possibilities. The pension plan interacts with the economy and the population in a number of ways. Demographic variables may affect both the economic burden of a national pension plan and the ability of the economy to sustain that burden. The tax-transfer process associated with the pension plan may have implications for national patterns of saving and consumption. The existence of a pension plan may also have implications for the size of the labor force, inasmuch as labor force participation rates may be affected. Changes in technology and the associated changes in average productivity levels bear directly on the size of the national income, and hence on the pension contribution base. The vehicle for the analysis is a hypothetical but broadly realistic simulation model of an economic-demographic system into which a national pension plan is inserted. All income, expenditure, and related aggregates are in real terms. The economy is basically neoclassical: full employment is assumed, output is generated by a Cobb-Douglas production process, and factors receive their marginal products. The model was designed for use in computer simulation experiments. The simulation results suggest a number of general conclusions, which may be summarized as follows: the introduction of a national pension plan (funded system) tends to increase the rate of economic growth until cost exceeds revenue; a scheme with full wage indexing is more expensive than one in which pensions are merely price indexed; the rate of technical progress is not a critical element in determining the economic burden of the pension scheme; raising the rate of benefits increases the economic burden, while raising the age of eligibility may decrease the burden substantially; and the level of fertility is an element in determining the long-run burden, since a sustained low fertility rate increases the proportion of the aged in the total population and increases the burden of the pension plan, whereas high fertility has the opposite effect.
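
The sketch below is a hedged, greatly simplified illustration of the kind of model described: a neoclassical economy with Cobb-Douglas output Y = A * K**alpha * L**(1 - alpha) and a funded pension plan whose contributions add to saving until benefits begin. Every parameter value is an assumption; none comes from the paper.

```python
# A minimal sketch of a neoclassical economy with a funded pension plan:
# output from a Cobb-Douglas production process, wages as labor's share,
# and pension contributions that add to capital accumulation until benefits
# are paid out. All numbers are illustrative assumptions.
ALPHA = 0.3          # assumed capital share
SAVING_RATE = 0.20   # assumed private saving rate out of output
CONTRIB_RATE = 0.06  # assumed pension contribution rate on wages
DEPRECIATION = 0.05  # assumed capital depreciation rate

def simulate(years=40, K=100.0, L=100.0, A=1.0,
             labor_growth=0.01, tfp_growth=0.01, benefit_start=20, benefit=5.0):
    fund = 0.0
    for t in range(years):
        Y = A * K**ALPHA * L**(1 - ALPHA)        # Cobb-Douglas output
        wages = (1 - ALPHA) * Y                  # labor's marginal-product share
        contributions = CONTRIB_RATE * wages
        benefits = benefit if t >= benefit_start else 0.0
        fund += contributions - benefits         # funded scheme accumulates a reserve
        # Net pension saving adds to the capital stock until cost exceeds revenue.
        K = (1 - DEPRECIATION) * K + SAVING_RATE * Y + (contributions - benefits)
        L *= 1 + labor_growth
        A *= 1 + tfp_growth
    return Y, fund

print(simulate())
```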
