• Title/Summary/Keyword: 생성형 모델 (generative model)

Search results: 811

Database for Hospice Nursing in Electronic Medical Record (호스피스 전자기록을 위한 데이터베이스 개발)

  • Kim, Young-Soon;Lee, Chang-Geol;Lee, Kyoung-Ok;Kim, Ok-Kyum;Kim, In-Hye;Kim, Mi-Jeong;Hwang, Ae-Ran;Lee, Won-Hee
    • Journal of Hospice and Palliative Care / v.7 no.2 / pp.200-213 / 2004
  • Purpose: The purpose of this study was to create an electronic nursing record form to build a hospice nursing process database to be used in the u-hospital EMR system. Specific aims of the study were: 1. To generate a complete, accurate, and simple electronic nursing record form. 2. To verify its appropriateness following documentation with the standardized hospice protocol. 3. To verify its validity and finalize the hospice nursing process database through discussion among hospice professionals. Methods: Nursing records from three independent hospice organizations were collected and analyzed by five expert hospice nurses with more than 10 years of experience, and a nursing record database was developed. This database was applied to 81 hospice patients at three hospice organizations to verify its completeness. Results: 1. An electronic nursing record form with completeness, accuracy, and simplicity was developed. 2. The completeness of the standardized home hospice service protocol was 95.86 percent. 3. The hospice nursing process database contains 18 items on health problems, 79 items on related causes and major symptoms, and 229 items on nursing interventions. Conclusion: The new nursing record form and database will reduce documentation time and articulate and streamline the working process among team members. They can also improve the quality of hospice services, and ultimately enable us to estimate hospice service costs.
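The three-level structure described above (health problems, related causes/major symptoms, nursing interventions, plus per-visit records) could be sketched as a relational schema. The table and column names below are hypothetical illustrations, not the schema actually used in the study.

```python
import sqlite3

# Hypothetical relational sketch of the hospice nursing process database:
# 18 health problems, each linked to related causes/symptoms (79 items)
# and nursing interventions (229 items); names are invented for illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE health_problem (
    id INTEGER PRIMARY KEY,
    name TEXT NOT NULL          -- e.g. 'pain'
);
CREATE TABLE cause_symptom (
    id INTEGER PRIMARY KEY,
    problem_id INTEGER NOT NULL REFERENCES health_problem(id),
    description TEXT NOT NULL
);
CREATE TABLE nursing_intervention (
    id INTEGER PRIMARY KEY,
    problem_id INTEGER NOT NULL REFERENCES health_problem(id),
    description TEXT NOT NULL
);
-- One record row ties a patient visit to the items selected by the nurse.
CREATE TABLE nursing_record (
    id INTEGER PRIMARY KEY,
    patient_id INTEGER NOT NULL,
    recorded_at TEXT NOT NULL,  -- ISO timestamp
    problem_id INTEGER NOT NULL REFERENCES health_problem(id),
    intervention_id INTEGER NOT NULL REFERENCES nursing_intervention(id)
);
""")
conn.execute("INSERT INTO health_problem(id, name) VALUES (1, 'pain')")
conn.execute(
    "INSERT INTO nursing_intervention(id, problem_id, description) "
    "VALUES (1, 1, 'administer prescribed analgesic')")
row = conn.execute("SELECT name FROM health_problem").fetchone()
print(row[0])  # pain
```

Selecting from predefined item tables rather than free text is what enables the reduced documentation time and cost estimation the abstract mentions.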


An Estimation of Concentration of Asian Dust (PM10) Using WRF-SMOKE-CMAQ (MADRID) During Springtime in the Korean Peninsula (WRF-SMOKE-CMAQ(MADRID)을 이용한 한반도 봄철 황사(PM10)의 농도 추정)

  • Moon, Yun-Seob;Lim, Yun-Kyu;Lee, Kang-Yeol
    • Journal of the Korean earth science society / v.32 no.3 / pp.276-293 / 2011
  • In this study, a modeling system consisting of the Weather Research and Forecasting (WRF) model, the Sparse Matrix Operator Kernel Emissions (SMOKE) system, the Community Multiscale Air Quality (CMAQ) model, and the CMAQ-Model of Aerosol Dynamics, Reaction, Ionization, and Dissolution (MADRID) was applied to estimate enhancements of PM10 during Asian dust events in Korea. In particular, five experimental formulas were applied within WRF-SMOKE-CMAQ (MADRID) to estimate Asian dust emissions from the major source regions in China and Mongolia: the US Environmental Protection Agency (EPA) model, the Goddard Global Ozone Chemistry Aerosol Radiation and Transport (GOCART) model, and the Dust Entrainment and Deposition (DEAD) model, as well as the formulas of Park and In (2003) and Wang et al. (2000). According to the weather map, backward trajectory, and satellite image analyses, Asian dust is generated by strong downward winds associated with an upper trough in a stagnating wave that develops with the upper-level jet stream, and its transport to Korea appears behind a surface front related to a cut-off low (seen as a comma-shaped cloud in satellite images). In the WRF-SMOKE-CMAQ simulations of PM10 concentration, the formula of Wang et al. reproduced the temporal and spatial distribution of Asian dust well, and the GOCART model showed low mean bias errors and root mean square errors. In the vertical profile analysis using Wang et al.'s formula, strong Asian dust with concentrations above 800 μg/m³ during March 31 to April 1, 2007 was transported below the boundary layer (about 1 km high), whereas weak Asian dust with concentrations below 400 μg/m³ during March 16-17, 2009 was transported above the boundary layer (about 1-3 km high). Furthermore, the difference in PM10 concentration between the CMAQ and CMAQ-MADRID models for March 31 to April 1, 2007 was large over East Asia: the CMAQ-MADRID model produced concentrations about 25 μg/m³ higher than the CMAQ model. In addition, the PM10 concentration removed by the cloud liquid-phase mechanism in the CMAQ-MADRID model reached a maximum of 15 μg/m³ over East Asia.
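The mean bias error and root mean square error used to rank the emission formulas are straightforward to compute; a minimal sketch, with illustrative values rather than data from the study:

```python
import math

def mean_bias_error(predicted, observed):
    """MBE = mean(predicted - observed); negative values mean underestimation."""
    return sum(p - o for p, o in zip(predicted, observed)) / len(observed)

def root_mean_square_error(predicted, observed):
    """RMSE = sqrt(mean((predicted - observed)^2))."""
    return math.sqrt(
        sum((p - o) ** 2 for p, o in zip(predicted, observed)) / len(observed))

# Illustrative hourly PM10 concentrations (ug/m3); not study data.
observed  = [120.0, 340.0, 510.0, 780.0]
predicted = [100.0, 360.0, 480.0, 800.0]
print(mean_bias_error(predicted, observed))         # -2.5
print(root_mean_square_error(predicted, observed))  # ~22.91
```

MBE shows the sign of the systematic error while RMSE penalizes large misses, which is why the two are reported together when comparing emission schemes.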

Distributed Edge Computing for DNA-Based Intelligent Services and Applications: A Review (딥러닝을 사용하는 IoT빅데이터 인프라에 필요한 DNA 기술을 위한 분산 엣지 컴퓨팅기술 리뷰)

  • Alemayehu, Temesgen Seyoum;Cho, We-Duke
    • KIPS Transactions on Computer and Communication Systems / v.9 no.12 / pp.291-306 / 2020
  • Nowadays, Data-Network-AI (DNA)-based intelligent services and applications have become a reality, providing a new dimension of services that improve quality of life and business productivity. Artificial intelligence (AI) can enhance the value of IoT data (data collected by IoT devices), and the Internet of Things (IoT) promotes the learning and intelligence capability of AI. To extract insights from massive volumes of IoT data in real time using deep learning, processing needs to happen at the IoT end devices where the data is generated. However, deep learning requires significant computational resources that may not be available at those devices. This problem has traditionally been addressed by transporting bulk data from the IoT end devices to cloud datacenters for processing, but transferring IoT big data to the cloud incurs prohibitively high transmission delays and raises major privacy concerns. Edge computing, where distributed computing nodes are placed close to the IoT end devices, is a viable solution to meet the high-computation and low-latency requirements and to preserve user privacy. This paper provides a comprehensive review of the current state of leveraging deep learning within edge computing to unleash the potential of IoT big data generated by IoT end devices. We believe this review will contribute to the development of DNA-based intelligent services and applications. It describes the different distributed training and inference architectures of deep learning models across multiple nodes of the edge computing platform, presents privacy-preserving approaches for deep learning in the edge computing environment, and surveys the application domains where deep learning on the network edge can be useful. Finally, it discusses open issues and challenges in leveraging deep learning within edge computing.

Analysis of the scholastic capability of ChatGPT utilizing the Korean College Scholastic Ability Test (대학입시 수능시험을 평가 도구로 적용한 ChatGPT의 학업 능력 분석)

  • WEN HUILIN;Kim Jinhyuk;Han Kyonghee;Kim Shiho
    • Journal of Platform Technology / v.11 no.5 / pp.72-83 / 2023
  • ChatGPT, commercially launched in late 2022, has shown successful results on various professional exams, including the US Bar Exam and the United States Medical Licensing Exam (USMLE), demonstrating its ability to pass qualifying exams in professional domains. However, further experimentation and analysis are required to assess ChatGPT's scholastic capabilities, such as logical inference and problem-solving skills. This study evaluated ChatGPT's scholastic performance using subjects of the Korean College Scholastic Ability Test (KCSAT): Korean, English, and Mathematics. The experimental results revealed that ChatGPT achieved a relatively high accuracy rate of 69% on the English exam but considerably lower rates of 34% and 19% in the Korean Language and Mathematics domains, respectively. By analyzing the results of the Korean language exam, the English exam, and TOPIK II, we evaluated ChatGPT's strengths and weaknesses in comprehension and logical inference. Although ChatGPT, as a generative language model, can understand and respond to general Korean, English, and Mathematics problems, it appears weak at tasks involving higher-level logical inference and complex mathematical problem solving. This study may provide simple yet accurate and effective evaluation criteria for assessing the performance of generative artificial intelligence through the analysis of KCSAT scores.


Short-Term Prediction of Vehicle Speed on Main City Roads using the k-Nearest Neighbor Algorithm (k-Nearest Neighbor 알고리즘을 이용한 도심 내 주요 도로 구간의 교통속도 단기 예측 방법)

  • Rasyidi, Mohammad Arif;Kim, Jeongmin;Ryu, Kwang Ryel
    • Journal of Intelligence and Information Systems / v.20 no.1 / pp.121-131 / 2014
  • Traffic speed is an important measure in transportation. It can be employed for various purposes, including traffic congestion detection, travel time estimation, and road design. Consequently, accurate speed prediction is essential in the development of intelligent transportation systems. In this paper, we present an analysis and speed prediction of a road section in Busan, South Korea. Whereas previous works used only historical data of the target link for prediction, here we extract features from real traffic data that also consider the neighboring links. After obtaining the candidate features, linear regression, model tree, and k-nearest neighbor (k-NN) are employed for both feature selection and speed prediction. The experimental results show that k-NN outperforms model tree and linear regression on the given dataset: compared to the other predictors, k-NN significantly reduces the error measures used, including mean absolute percentage error (MAPE) and root mean square error (RMSE).
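As a rough illustration of the k-NN predictor, the sketch below predicts a link's speed as the average target over the k most similar historical feature vectors, with features drawn from the target and neighboring links. The feature layout and all numbers are invented for the example, not the paper's dataset.

```python
import math

def knn_predict(train_X, train_y, query, k=3):
    """Predict speed as the mean target of the k nearest training rows
    by Euclidean distance in feature space."""
    ranked = sorted((math.dist(x, query), y) for x, y in zip(train_X, train_y))
    return sum(y for _, y in ranked[:k]) / k

def mape(predicted, observed):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs((o - p) / o)
                       for p, o in zip(predicted, observed)) / len(observed)

# Hypothetical rows: [target-link speed 15 min ago, upstream-neighbor speed,
# downstream-neighbor speed] -> target-link speed 15 min later (km/h).
train_X = [[42, 40, 45], [30, 28, 33], [55, 52, 58], [38, 36, 41], [25, 24, 27]]
train_y = [41, 29, 54, 37, 26]
pred = knn_predict(train_X, train_y, [40, 38, 43], k=3)
print(round(pred, 1))  # 35.7
```

The same distance ranking doubles as a crude feature-relevance check: features that do not improve neighbor quality can be dropped, mirroring the paper's joint feature selection and prediction.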

Fat Client-Based Abstraction Model of Unstructured Data for Context-Aware Service in Edge Computing Environment (에지 컴퓨팅 환경에서의 상황인지 서비스를 위한 팻 클라이언트 기반 비정형 데이터 추상화 방법)

  • Kim, Do Hyung;Mun, Jong Hyeok;Park, Yoo Sang;Choi, Jong Sun;Choi, Jae Young
    • KIPS Transactions on Computer and Communication Systems / v.10 no.3 / pp.59-70 / 2021
  • With recent advances in the Internet of Things (IoT), context-aware systems that provide customized services have become important. Existing context-aware systems analyze data generated around the user and abstract context information that expresses the state of a situation. However, most of these datasets are unstructured and difficult to process with simple approaches, so providing context-aware services from them requires a simplified processing method. Deep learning applications are one example that must handle such unstructured datasets. Because the phases of a deep learning application, from data acquisition to analysis, are tightly coupled, the pipeline is inflexible when the target analysis model or application is modified. Therefore, this paper proposes an abstraction model that separates these phases and processes unstructured datasets for analysis. The proposed model uses a description language, the Analysis Model Description Language (AMDL), to deploy the analysis phases onto fat clients, instances specifically designed for resource-oriented tasks in edge computing environments, and handles different analysis applications and their parameters through AMDL and fat-client profiles. Experiments demonstrate functional scalability through example AMDL and fat-client profiles targeting a vehicle image recognition model for a vehicle access control notification service, with process-by-process monitoring of the collection, preprocessing, and analysis of unstructured data.
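The core idea, decoupling acquisition, preprocessing, and analysis so that the analysis model can be swapped without touching the other phases, could be sketched as below. The profile fields, function names, and stand-in logic are hypothetical illustrations, not actual AMDL syntax or the paper's implementation.

```python
# Hypothetical sketch of phase decoupling: each phase is an independent,
# swappable callable selected by a profile (loosely inspired by the paper's
# AMDL/fat-client profiles; all names here are invented).
def collect_frames(source):
    """Acquisition phase: pull raw items from a source."""
    return list(source)

def resize_and_normalize(frames):
    """Preprocessing phase: shape raw data for the analysis model."""
    return [f.lower().strip() for f in frames]  # stand-in transform

def plate_recognizer(frames):
    """Analysis phase: stand-in for a vehicle image recognition model."""
    return [f for f in frames if f.startswith("plate")]

PHASES = {
    "collect": {"camera": collect_frames},
    "preprocess": {"default": resize_and_normalize},
    "analyze": {"vehicle-access": plate_recognizer},
}

def run_pipeline(profile, source):
    """Run the three phases named by a profile; swapping only the 'analyze'
    entry changes the model without changing collection or preprocessing."""
    data = PHASES["collect"][profile["collect"]](source)
    data = PHASES["preprocess"][profile["preprocess"]](data)
    return PHASES["analyze"][profile["analyze"]](data)

profile = {"collect": "camera", "preprocess": "default",
           "analyze": "vehicle-access"}
result = run_pipeline(profile, [" Plate-1234 ", "background", "PLATE-9876"])
print(result)  # ['plate-1234', 'plate-9876']
```

Registering each phase behind a name is what gives the claimed functional scalability: a new analysis application only adds an entry and a profile, leaving existing phases untouched.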

Spatial Gap-filling of GK-2A/AMI Hourly AOD Products Using Meteorological Data and Machine Learning (기상모델자료와 기계학습을 이용한 GK-2A/AMI Hourly AOD 산출물의 결측화소 복원)

  • Youn, Youjeong;Kang, Jonggu;Kim, Geunah;Park, Ganghyun;Choi, Soyeon;Lee, Yangwon
    • Korean Journal of Remote Sensing / v.38 no.5_3 / pp.953-966 / 2022
  • Since aerosols adversely affect human health by, for example, deteriorating air quality, quantitative observation of their distribution and characteristics is essential. Satellite-based Aerosol Optical Depth (AOD) data have recently been used in many studies as a means of acquiring periodic, quantitative information on a global scale, but optical-sensor-based satellite AOD images are missing in cloud-covered areas. In this study, we produced gap-free GeoKompsat 2A (GK-2A) Advanced Meteorological Imager (AMI) hourly AOD images by building a Random Forest based gap-filling model with gridded meteorological and geographic variables as inputs. The model achieved a Mean Bias Error (MBE) of -0.002 and a Root Mean Square Error (RMSE) of 0.145, better than the target accuracy of the original data, and, considering that the target is an atmospheric variable, a Correlation Coefficient (CC) of 0.714 indicates sufficient explanatory power. The high temporal resolution of geostationary satellites is suitable for observing diurnal variation, and the model is useful for other research purposes such as input to atmospheric correction, estimation of ground-level PM, and analysis of small fires or pollutants.
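A minimal sketch of the gap-filling idea: fit a regressor on cloud-free pixels (meteorological features → AOD) and predict only the masked pixels. A one-feature least-squares line stands in here for the paper's Random Forest, and the pixel strip, feature, and values are invented for illustration.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y ~ intercept + slope * x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Hypothetical 1-D strip of pixels: feature = relative humidity (%),
# target = AOD; None marks cloud-masked pixels to be filled.
humidity = [40, 50, 60, 70, 80, 90]
aod      = [0.10, 0.15, None, 0.25, None, 0.35]

# Train only on cloud-free pixels, then predict only the masked ones.
train = [(h, a) for h, a in zip(humidity, aod) if a is not None]
slope, intercept = fit_line([h for h, _ in train], [a for _, a in train])
filled = [a if a is not None else round(slope * h + intercept, 3)
          for h, a in zip(humidity, aod)]
print(filled)  # [0.1, 0.15, 0.2, 0.25, 0.3, 0.35]
```

Validating such a model means masking known pixels, filling them, and scoring MBE/RMSE/CC against the held-out truth, which is what the reported -0.002 / 0.145 / 0.714 figures summarize.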

A Study on the Prediction of Nitrogen Oxide Emissions in Rotary Kiln Process using Machine Learning (머신러닝 기법을 이용한 로터리 킬른 공정의 질소산화물 배출예측에 관한 연구)

  • Je-Hyeung Yoo;Cheong-Yeul Park;Jae Kwon Bae
    • Journal of Industrial Convergence / v.21 no.7 / pp.19-27 / 2023
  • As the secondary battery market expands, the process of treating laterite ore with the rotary kiln and electric furnace method is expanding worldwide. As ESG management spreads, the management of air pollutants such as nitrogen oxides in exhaust gases is being strengthened. The rotary kiln, one of the main facilities of the pyrometallurgical process, dries and pre-reduces the ore and generates nitrogen oxides, so predicting nitrogen oxide emissions is important. In this study, LSTM was used for regression prediction and LightGBM for classification prediction, and the models were then optimized using AutoML. With LSTM, the prediction score after 5 minutes was 0.86 with an MAE of 5.13 ppm, and after 40 minutes 0.38 with an MAE of 10.84 ppm. With LightGBM for classification prediction, the test accuracy was 0.75 after 5 minutes and 0.61 after 40 minutes, a level usable in actual operation, and model optimization through AutoML improved the accuracy from 0.75 to 0.80 after 5 minutes and from 0.61 to 0.70 after 40 minutes. Nitrogen oxide predictions from this study can be applied to actual operations, contributing to compliance with air pollutant emission regulations and to ESG management.
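The 5-minute versus 40-minute horizons correspond to how far ahead of the input window the target reading is taken when building training samples for either model. A sketch of that windowing, with made-up 1-minute NOx readings rather than plant data:

```python
def make_windows(series, window, horizon):
    """Build (input window, target) pairs: the target is the reading
    `horizon` steps after the end of each `window`-step input."""
    samples = []
    for start in range(len(series) - window - horizon + 1):
        x = series[start:start + window]
        y = series[start + window + horizon - 1]
        samples.append((x, y))
    return samples

# Made-up 1-minute NOx readings (ppm); not data from the study.
nox = [52, 54, 51, 55, 58, 57, 60, 62, 59, 61, 63, 64]
five_min = make_windows(nox, window=3, horizon=5)
print(len(five_min), five_min[0])  # 5 ([52, 54, 51], 62)
```

Stretching `horizon` from 5 to 40 leaves fewer, harder samples, which is consistent with the scores degrading at the longer lead time.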

Development of Simulation for Estimating Growth Changes of Locally Managed European Beech Forests in the Eifel Region of Germany (독일 아이펠의 지역적 관리에 따른 유럽너도밤나무 숲의 생장변화 추정을 위한 시뮬레이션 개발)

  • Jae-gyun Byun;Martina Ross-Nickoll;Richard Ottermanns
    • Journal of the Korea Society for Simulation / v.33 no.1 / pp.1-17 / 2024
  • Forest management is known to beneficially influence stand structure and wood production, yet quantitative understanding, as well as illustrative depiction, of the effects of different management approaches on tree growth and stand dynamics is still scarce. Long-term management of beech forests must balance public interests with ecological aspects, and efficient forest management requires reliable prediction of tree growth change. We aimed to develop a novel hybrid simulation approach that realistically simulates short- as well as long-term effects of the forest management regimes commonly applied, but not limited, to German low mountain ranges, including near-natural forest management based on single-tree selection harvesting. The model consists of three modules for (a) natural seedling regeneration, (b) mortality adjustment, and (c) tree growth simulation. An existing validated growth model was used to calculate single-year tree growth and was embedded in a newly developed simulation process whose modules were calibrated on practical forest-management experience and local forestry advice. We included in our simulation tool the following beech forest-management scenarios, representative of German low mountain ranges: (1) plantation, (2) continuous cover forestry, and (3) reserved forest. The simulation results show robust consistency with expert knowledge and good agreement with mid-term monitoring data, indicating strong model performance. We thus developed a hybrid simulation that realistically reflects different management strategies and tree growth in low mountain ranges. This study provides a basis for a new model calibration method with translational potential for further studies developing reliable, tailor-made models adjusted to local situations in beech forest management.
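The three-module structure (regeneration, mortality, growth) iterated year by year could be sketched as below. The update rules, rates, and the harvest threshold are placeholders invented for illustration, not the calibrated modules or the validated growth model from the study.

```python
import random

def regenerate(stand, n_seedlings):
    """Module (a): natural regeneration adds seedlings (dbh in cm)."""
    stand.extend([1.0] * n_seedlings)

def apply_mortality(stand, rate, rng):
    """Module (b): each tree survives the year with probability 1 - rate."""
    return [dbh for dbh in stand if rng.random() > rate]

def grow(stand, increment):
    """Module (c): single-year growth; a fixed dbh increment stands in
    for the validated growth model used in the study."""
    return [dbh + increment for dbh in stand]

def simulate(years, harvest_threshold=None, seed=0):
    """Yearly loop: regeneration -> mortality -> growth, with optional
    single-tree selection harvest of stems above a dbh threshold."""
    rng = random.Random(seed)
    stand = [10.0] * 50  # made-up initial stand: 50 trees of 10 cm dbh
    for _ in range(years):
        regenerate(stand, n_seedlings=5)
        stand = apply_mortality(stand, rate=0.02, rng=rng)
        stand = grow(stand, increment=0.4)
        if harvest_threshold is not None:
            stand = [dbh for dbh in stand if dbh < harvest_threshold]
    return stand

unmanaged = simulate(30)                          # reserved-forest scenario
selection = simulate(30, harvest_threshold=20.0)  # single-tree selection
print(len(unmanaged), len(selection))
```

Swapping the placeholder rules for calibrated modules, while keeping this loop, is the hybrid structure the abstract describes: an external growth model per year, wrapped by locally calibrated regeneration and mortality.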

Clickstream Big Data Mining for Demographics based Digital Marketing (인구통계특성 기반 디지털 마케팅을 위한 클릭스트림 빅데이터 마이닝)

  • Park, Jiae;Cho, Yoonho
    • Journal of Intelligence and Information Systems / v.22 no.3 / pp.143-163 / 2016
  • The demographics of Internet users are the most basic and important sources for target marketing or personalized advertisements on digital marketing channels, which include email, mobile, and social media. However, it has gradually become difficult to collect the demographics of Internet users because their activities are anonymous in many cases. Although marketing departments can obtain demographics through online or offline surveys, these approaches are expensive, time-consuming, and likely to include false statements. Clickstream data is the record an Internet user leaves behind while visiting websites. As the user clicks anywhere in a webpage, the activity is logged in semi-structured website log files. Such data allow us to see what pages users visited, how long they stayed, how often and when they visited, which sites they prefer, what keywords they used to find a site, whether they purchased anything, and so forth. For this reason, some researchers have tried to infer the demographics of Internet users from their clickstream data, deriving various independent variables likely to be correlated with demographics: search keywords; frequency and intensity by time, day, and month; variety of websites visited; text information from visited web pages; etc. The demographic attributes to predict also vary across studies, covering gender, age, job, location, income, education, marital status, and presence of children. A variety of data mining methods, such as LSA, SVM, decision trees, neural networks, logistic regression, and k-nearest neighbors, have been used for prediction model building. However, this research has not yet identified which data mining method is appropriate for predicting each demographic variable. Moreover, the independent variables studied so far need to be reviewed, combined as needed, and evaluated for building the best prediction model.
The objective of this study is to choose the clickstream attributes most likely to be correlated with demographics based on the results of previous research, and then to identify which data mining method is best suited to predicting each demographic attribute. Among the demographic attributes, this paper focuses on predicting gender, age, marital status, residence, and job, applying 64 clickstream attributes drawn from previous research. The overall process of predictive model building is composed of four steps. In the first step, we create user profiles that include the 64 clickstream attributes and 5 demographic attributes. The second step performs dimension reduction of the clickstream variables to address the curse of dimensionality and the overfitting problem, using three approaches based on decision trees, PCA, and cluster analysis. In the third step, we build alternative predictive models for each demographic variable, using SVM, neural networks, and logistic regression. The last step evaluates the alternative models in terms of accuracy and selects the best model. For the experiments, we used clickstream data covering 5 demographic attributes and 16,962,705 online activities for 5,000 Internet users. IBM SPSS Modeler 17.0 was used for the prediction process, and 5-fold cross validation was conducted to enhance the reliability of the experiments. The experimental results verify that a specific data mining method is well suited to each demographic variable. For example, age prediction performs best with decision-tree-based dimension reduction and a neural network, whereas gender and marital status are predicted most accurately by SVM without dimension reduction.
We conclude that the online behaviors of Internet users, captured through clickstream data analysis, can be used to predict their demographics and thereby be utilized for digital marketing.
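The model-selection step, trying alternative classifiers per demographic attribute and keeping the most accurate under 5-fold cross-validation, can be sketched generically. The stand-in "models" below are trivial rules, not the SVM/neural network/logistic regression used in the study, and the feature vectors and labels are invented.

```python
def five_fold_indices(n, k=5):
    """Yield (train indices, test indices) splits for k-fold cross-validation."""
    fold = n // k
    for i in range(k):
        test = list(range(i * fold, (i + 1) * fold if i < k - 1 else n))
        train = [j for j in range(n) if j not in test]
        yield train, test

def cv_accuracy(model, X, y, k=5):
    """Mean accuracy of a fit-then-predict callable over k folds."""
    scores = []
    for train, test in five_fold_indices(len(X), k):
        preds = model([X[i] for i in train], [y[i] for i in train],
                      [X[i] for i in test])
        scores.append(sum(p == y[i] for p, i in zip(preds, test)) / len(test))
    return sum(scores) / len(scores)

def majority_class(train_X, train_y, test_X):
    """Stand-in model 1: always predict the most frequent training label."""
    top = max(set(train_y), key=train_y.count)
    return [top] * len(test_X)

def first_feature_rule(train_X, train_y, test_X):
    """Stand-in model 2: threshold on the first clickstream feature."""
    return ["F" if x[0] > 0.5 else "M" for x in test_X]

# Made-up single-feature clickstream vectors and gender labels.
X = [[0.9], [0.8], [0.2], [0.1], [0.7], [0.3], [0.6], [0.4], [0.95], [0.05]]
y = ["F", "F", "M", "M", "F", "M", "F", "M", "F", "M"]
candidates = {"majority": majority_class, "threshold": first_feature_rule}
best = max(candidates, key=lambda name: cv_accuracy(candidates[name], X, y))
print(best)  # threshold
```

Running this selection loop once per demographic attribute, over the real candidate models, yields exactly the per-attribute winners the study reports (e.g. SVM for gender, neural network for age).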