• Title/Summary/Keyword: Individual Support (개별지원)

Search Results: 855

Multi-classification of Osteoporosis Grading Stages Using Abdominal Computed Tomography with Clinical Variables : Application of Deep Learning with a Convolutional Neural Network (멀티 모달리티 데이터 활용을 통한 골다공증 단계 다중 분류 시스템 개발: 합성곱 신경망 기반의 딥러닝 적용)

  • Tae Jun Ha;Hee Sang Kim;Seong Uk Kang;DooHee Lee;Woo Jin Kim;Ki Won Moon;Hyun-Soo Choi;Jeong Hyun Kim;Yoon Kim;So Hyeon Bak;Sang Won Park
    • Journal of the Korean Society of Radiology / v.18 no.3 / pp.187-201 / 2024
  • Osteoporosis is a major global health issue that often remains undetected until a fracture occurs. To facilitate early detection, deep learning (DL) models were developed to classify osteoporosis using abdominal computed tomography (CT) scans. This study used retrospectively collected data from 3,012 contrast-enhanced abdominal CT scans. Three DL models were constructed: one using image data, one using demographic/clinical information, and one using the combined multi-modality data. Patients were categorized into normal, osteopenia, and osteoporosis groups based on their T-scores obtained from dual-energy X-ray absorptiometry. The models showed high accuracy and effectiveness, with the combined-data model performing best, achieving an area under the receiver operating characteristic curve of 0.94 and an accuracy of 0.80. The image-based model also performed well, while the demographic-data model had lower accuracy and effectiveness. In addition, the DL model was interpreted with gradient-weighted class activation mapping (Grad-CAM) to highlight clinically relevant features in the images, revealing the femoral neck as a common site of fractures. The study shows that DL can accurately identify osteoporosis stages from clinical data, indicating the potential of abdominal CT scans for early osteoporosis detection and for reducing fracture risk through prompt treatment.
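As a rough illustration of the fusion strategy summarized in this abstract, the sketch below (not the authors' code) combines a small CNN branch for a CT slice with an MLP branch for demographic/clinical variables and classifies into the three groups; the layer sizes, the clinical feature count, and all names are illustrative assumptions.

```python
# Minimal multi-modality sketch: CNN image features fused with tabular
# clinical variables for 3-class osteoporosis grading. Not the authors' model.
import torch
import torch.nn as nn

class MultiModalOsteoNet(nn.Module):
    def __init__(self, n_clinical: int = 8, n_classes: int = 3):
        super().__init__()
        # CNN branch for a single-channel CT slice (e.g. 1 x 224 x 224).
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),            # -> (batch, 32)
        )
        # MLP branch for demographic/clinical variables (age, sex, BMI, ...).
        self.tabular = nn.Sequential(nn.Linear(n_clinical, 32), nn.ReLU())
        # Fusion head over the concatenated embeddings.
        self.head = nn.Sequential(
            nn.Linear(32 + 32, 64), nn.ReLU(),
            nn.Linear(64, n_classes),   # normal / osteopenia / osteoporosis
        )

    def forward(self, image, clinical):
        return self.head(torch.cat([self.cnn(image), self.tabular(clinical)], dim=1))

if __name__ == "__main__":
    model = MultiModalOsteoNet()
    logits = model(torch.randn(4, 1, 224, 224), torch.randn(4, 8))
    print(logits.shape)  # torch.Size([4, 3])
```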

Methodology for Identifying Issues of User Reviews from the Perspective of Evaluation Criteria: Focus on a Hotel Information Site (사용자 리뷰의 평가기준 별 이슈 식별 방법론: 호텔 리뷰 사이트를 중심으로)

  • Byun, Sungho;Lee, Donghoon;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.22 no.3 / pp.23-43 / 2016
  • As a result of the growth of Internet data and the rapid development of Internet technology, "big data" analysis has gained prominence as a major approach for evaluating and mining enormous amounts of data for various purposes. In recent years, people have tended to share their experiences related to leisure activities while also reviewing others' input concerning those activities. By referring to others' leisure-related experiences, they can gather information that may lead to better leisure activities in the future. This phenomenon appears across many kinds of leisure activities such as movies, traveling, accommodation, and dining. Apart from blogs and social networking sites, many other websites provide a wealth of information related to leisure activities. Most of these websites present information on each product in various formats depending on their purposes and perspectives. Generally, they provide the average ratings and detailed reviews of users who actually used the products or services, and these ratings and reviews can support the decisions of potential customers purchasing the same products or services. However, existing websites offering information on leisure activities provide ratings and reviews at only a single level of the evaluation criteria. Therefore, to identify the main issue for each evaluation criterion, as well as the characteristics of the specific elements comprising each criterion, users have to read a large number of reviews. In particular, as most users look for the characteristics of the detailed elements of one or more specific evaluation criteria according to their priorities, they must spend a great deal of time and effort reading and understanding many reviews to obtain the desired information. Although some websites break down the evaluation criteria and direct users to enter their reviews according to different levels of criteria, the excessive number of input sections makes the whole process inconvenient for users. Further, problems arise if a user does not follow the instructions for the input sections or fills in the wrong sections. Finally, treating such a breakdown of the evaluation criteria as a realistic alternative is difficult, because identifying all the detailed criteria for each evaluation criterion is itself a challenging task. For example, when a review about a certain hotel is written, people tend to write only single-level comments on various components such as accessibility, rooms, services, or food. These may answer the most frequently asked questions, such as the distance to the nearest subway station or the condition of the bathroom, but they still lack detailed information on these questions. In addition, if a breakdown of the evaluation criteria were provided along with various input sections, a user might fill in only the criterion for accessibility, or enter the wrong information, such as information about the rooms in the section for accessibility; the reliability of the segmented review would then be greatly reduced. In this study, we propose an approach to overcome two limitations of existing leisure activity information websites: (1) the reliability of reviews for each evaluation criterion and (2) the difficulty of identifying the detailed contents that make up each evaluation criterion.
In our proposed methodology, we first identify the review content and construct a lexicon for each evaluation criterion using the terms that are frequently used for that criterion. Next, the sentences in the review documents containing the terms in the constructed lexicon are decomposed into review units, which are then reorganized by evaluation criterion. Finally, the issues in the review units constructed for each evaluation criterion are derived, and summary results are provided together with the review units themselves. This approach aims to save users time and effort, because they only read the information relevant to each evaluation criterion rather than the entire text of every review. Our methodology is based on topic modeling, which is actively used in text analysis; however, each review is decomposed into sentence units rather than treated as a single document, and the resulting review units are reorganized according to each evaluation criterion before the subsequent analysis. In this respect the work differs substantially from existing topic modeling-based studies. In this paper, we collected 423 reviews from hotel information websites and decomposed them into 4,860 review units, which we reorganized according to six evaluation criteria. By applying our methodology to these review units, we present the analysis results and demonstrate the utility of the proposed methodology.
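A rough sketch of the pipeline summarized above, not the authors' implementation: reviews are split into sentence-level review units, units are assigned to evaluation criteria through a term lexicon, and topic modeling (LDA here) is applied per criterion to surface its most prominent terms. The lexicon, the toy reviews, and all names are assumptions.

```python
# Lexicon-based decomposition of reviews into criterion-specific review units,
# followed by per-criterion topic modeling. Toy data for illustration only.
import re
from collections import defaultdict
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

criterion_lexicon = {                      # hypothetical criterion -> terms
    "accessibility": {"subway", "station", "airport", "distance", "walk"},
    "room":          {"room", "bed", "bathroom", "clean", "view"},
    "service":       {"staff", "check-in", "friendly", "service"},
}

reviews = [
    "The room was clean and the bed was comfortable. The subway station is a five minute walk.",
    "Staff were friendly at check-in. The bathroom was small but clean.",
]

# 1) Decompose reviews into sentence-level review units.
units = [s.strip() for r in reviews for s in re.split(r"[.!?]", r) if s.strip()]

# 2) Reassign each unit to every criterion whose lexicon terms it contains.
units_by_criterion = defaultdict(list)
for unit in units:
    tokens = set(unit.lower().split())
    for criterion, terms in criterion_lexicon.items():
        if tokens & terms:
            units_by_criterion[criterion].append(unit)

# 3) Derive the main terms per criterion with LDA over its review units.
for criterion, crit_units in units_by_criterion.items():
    vec = CountVectorizer(stop_words="english")
    dtm = vec.fit_transform(crit_units)
    lda = LatentDirichletAllocation(n_components=1, random_state=0).fit(dtm)
    vocab = vec.get_feature_names_out()
    top_terms = [vocab[i] for i in lda.components_[0].argsort()[::-1][:3]]
    print(criterion, "->", top_terms)
```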

A Study on Improvements on Legal Structure on Security of National Research and Development Projects (과학기술 및 학술 연구보고서 서비스 제공을 위한 국가연구개발사업 관련 법령 입법론 -저작권법상 공공저작물의 자유이용 제도와 연계를 중심으로-)

  • Kang, Sun Joon;Won, Yoo Hyung;Choi, San;Kim, Jun Huck;Kim, Seul Ki
    • Proceedings of the Korea Technology Innovation Society Conference / 2015.05a / pp.545-570 / 2015
  • Korea is among the ten countries with the largest R&D budget and the highest R&D investment-to-GDP ratio, yet the subject of security and protection of R&D results remains relatively unexplored in the country. Other countries have implemented legal measures to properly protect cutting-edge industrial technologies that would adversely affect national security and the economy if leaked abroad. While Korea has a generally stable legal framework, as provided in the Regulation on the National R&D Program Management (the "Regulation") and the Act on Industrial Technology Protection, many difficulties arise in practice when determining the details of security management and obligations and when setting standards for carrying out national R&D projects. This paper proposes to modify and improve the security level classification standards in the Regulation. The Regulation provides a dual decision-making system for the security level of R&D projects: the level can be determined either by the researcher or by the central agency in charge of the project. Unifying this dual system would avoid unnecessary confusion. To prevent leakage, it is crucial that research projects be carried out in compliance with their assigned security levels and standards and that results be effectively managed. The paper examines, from a practitioner's perspective, the relevant legal provisions on leakage of confidential R&D projects, infringement, injunction, punishment, attempt and conspiracy, dual liability, the duty to report the security management process to the National Intelligence Service (the "NIS"), other security issues arising from national R&D projects, and manual drafting in case of a breach. The paper recommends training security and technology experts, such as industrial security specialists, so that laws on security level classification standards and the relevant technological contents can be properly amended. A quarterly policy development committee should also be set up by the NIS in cooperation with relevant organizations. The committee should provide a project management manual offering step-by-step guidance to organizations that carry out national R&D projects, as a preventive measure against possible leakage. In the short term, the duties of the NIS National Industrial Security Center should be expanded to cover the security of national R&D projects. In the long term, a security task force should be set up to protect, support, and manage the projects; its responsibilities should include research, policy development, public relations, and training on security-related issues. Through these means, a social consensus must be reached on the need to protect national R&D projects. The most efficient way to implement these measures is to facilitate security training programs and meetings that provide opportunities for communication among industrial security experts and researchers. Furthermore, the Regulation's security provisions must be examined and improved.


Assessment of Nutrient Intakes of Lunch Meals for the Aged Customers at the Elderly Care Facilities Through Measuring Cooking Yield Factor and the Weighed Plate Waste (조리 중량 변화 계수 및 잔반계측법을 이용한 노인복지시설 이용자의 점심식사 영양섭취평가)

  • Chang, Hye-Ja;Yi, Na-Young;Kim, Tae-Hee
    • Journal of Nutrition and Health / v.42 no.7 / pp.650-663 / 2009
  • The purposes of this study were to investigate the portion sizes of the menus served and to evaluate the nutrient intake of lunches at three elderly care facility foodservices located in Seoul. A weighed plate waste method was employed to measure plate waste and consumption of the menus served. Yield factors were calculated from cooking experiments based on standardized recipes and were used to evaluate nutrient intake. One hundred elderly people participated in the plate waste measurement and were asked to complete a questionnaire. Nutrient analyses of the served and consumed meals were performed using the CAN program. The yield factors after cooking were 2.4 for rice dishes regardless of type, 1.58 for thick soups, 0.60 to 0.70 for meat dishes, and 1.0 to 1.25 for blanched vegetables. Average consumption quantities were 235.97 g for rice, 248.53 g for soup, 72.83 g for meat dishes, 39.80 g for vegetables, and 28.36 g for Kimchi. The average plate waste rate was 14.0%, with the highest waste percentages observed for Kimchi (26.2%) and meat/fish dishes (17.3%). The Nutrient Adequacy Ratio (NAR) results showed that iron (0.12), calcium (0.64), riboflavin (0.80), and folic acid (0.97) were less than 1.0 in both the male and female elderly groups, with significant differences in NAR among the three facilities. Compared with one third of the Dietary Reference Intakes (DRIs) for the elderly, the nutrient intake analysis showed that calcium (100%) and iron (100%), followed by riboflavin, vitamin A, and vitamin B6, did not meet one third of the Estimated Average Requirement (EAR). For nutritious meal management, a professional dietitian should be placed at each elderly care center to develop standardized recipes that take yield factors and the elderly's health and nutrition status into account.
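The quantities used in this abstract can be stated concretely. The sketch below uses made-up numbers and assumed conventions (e.g. NAR truncated at 1.0); it is an illustration, not the study's actual calculation.

```python
# Illustrative calculations: cooking yield factor, consumed weight from plate
# waste, and Nutrient Adequacy Ratio (NAR). Numbers are hypothetical.
def yield_factor(cooked_weight_g: float, raw_weight_g: float) -> float:
    """Weight change coefficient of a dish after cooking."""
    return cooked_weight_g / raw_weight_g

def consumed_weight(served_g: float, plate_waste_g: float) -> float:
    """Amount actually eaten = amount served minus weighed plate waste."""
    return served_g - plate_waste_g

def nar(intake: float, reference_intake: float) -> float:
    """Nutrient Adequacy Ratio, conventionally truncated at 1.0."""
    return min(intake / reference_intake, 1.0)

if __name__ == "__main__":
    # Hypothetical example: cooked rice weighing 2.4 times its raw weight,
    # 236 g of rice served, 20 g left on the plate.
    print(round(yield_factor(240, 100), 2))   # 2.4
    print(consumed_weight(236, 20))           # 216 g eaten
    print(round(nar(350, 700), 2))            # e.g. 350 mg intake vs 700 mg reference -> 0.5
```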

The Prediction of Export Credit Guarantee Accident using Machine Learning (기계학습을 이용한 수출신용보증 사고예측)

  • Cho, Jaeyoung;Joo, Jihwan;Han, Ingoo
    • Journal of Intelligence and Information Systems / v.27 no.1 / pp.83-102 / 2021
  • The government recently announced various policies for developing the big-data and artificial intelligence fields, providing the public with a great opportunity through the disclosure of high-quality data held by public institutions. KSURE (Korea Trade Insurance Corporation) is a major public institution for financial policy in Korea and is strongly committed to backing export companies with various systems. Nevertheless, there are still few realized business models based on big-data analyses. In this situation, this paper aims to develop a new business model that can be applied to the ex-ante prediction of the likelihood of credit guarantee insurance accidents. We utilize internal data from KSURE, which supports export companies in Korea, and apply machine learning models. We then compare the performance of predictive models including Logistic Regression, Random Forest, XGBoost, LightGBM, and a DNN (Deep Neural Network). For decades, many researchers have tried to find better models for predicting bankruptcy, since ex-ante prediction is crucial for corporate managers, investors, creditors, and other stakeholders. Research on predicting financial distress or bankruptcy originated with Smith (1930), Fitzpatrick (1932), and Merwin (1942). One of the most famous models is Altman's Z-score model (Altman, 1968), based on multiple discriminant analysis, which is still widely used in both research and practice; it uses five key financial ratios to predict the probability of bankruptcy within the next two years. Ohlson (1980) introduced a logit model to complement some limitations of previous models, and Elmer and Borowski (1988) developed and examined a rule-based, automated system for the financial analysis of savings and loans. Since the 1980s, researchers in Korea have also examined the prediction of financial distress or bankruptcy. Kim (1987) analyzed financial ratios and developed a prediction model; Han et al. (1995, 1996, 1997, 2003, 2005, 2006) constructed prediction models using various techniques including artificial neural networks; Yang (1996) introduced multiple discriminant analysis and a logit model; and Kim and Kim (2001) utilized artificial neural network techniques for the ex-ante prediction of insolvent enterprises. Since then, many scholars have tried to predict financial distress or bankruptcy more precisely with models such as Random Forest or SVM. One major distinction of our research from previous work is that we focus on the predicted probability of default for each case, not only on the classification accuracy of each model over the entire sample. Most predictive models in this paper achieve a classification accuracy of about 70% on the entire sample: LightGBM shows the highest accuracy of 71.1% and the Logit model the lowest at 69%. However, these results are open to multiple interpretations. In a business context, more emphasis must be placed on minimizing Type II errors, which cause more harmful operating losses for the guaranty company. Thus, we also compare classification accuracy by splitting the predicted probability of default into ten equal intervals.
When we examine the classification accuracy for each interval, the Logit model has the highest accuracy (100%) in the 0-10% interval of the predicted probability of default but a relatively low accuracy of 61.5% in the 90-100% interval. On the other hand, Random Forest, XGBoost, LightGBM, and the DNN give more desirable results: they show higher accuracy in both the 0-10% and 90-100% intervals but lower accuracy around the 50% range. Regarding the distribution of samples across the intervals, both the LightGBM and XGBoost models place a relatively large number of samples in the 0-10% and 90-100% intervals. Although the Random Forest model has an advantage in classification accuracy on a small number of cases, LightGBM or XGBoost may be more desirable because they classify a large number of cases into the two extreme intervals, even allowing for their relatively lower classification accuracy there. Considering the importance of Type II errors and total prediction accuracy, XGBoost and the DNN show superior performance, followed by Random Forest and LightGBM, while Logistic Regression shows the worst performance. Each predictive model nevertheless has a comparative advantage under different evaluation standards; for instance, the Random Forest model shows almost 100% accuracy for samples expected to have a high probability of default. Collectively, a more comprehensive ensemble model that combines multiple classification machine learning models with majority voting could be constructed to maximize overall performance.
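The interval-wise evaluation described above can be sketched as follows on synthetic data (not the KSURE internal data). The sketch compares only two of the listed models; XGBoost, LightGBM, or a DNN could be added in the same way. All names and settings are illustrative assumptions.

```python
# Fit classifiers and report accuracy within each decile of the predicted
# probability of "default" (positive class), on synthetic data.
import numpy as np
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, weights=[0.8, 0.2], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "Logit": LogisticRegression(max_iter=1000),
    "RandomForest": RandomForestClassifier(n_estimators=200, random_state=0),
}

for name, model in models.items():
    model.fit(X_tr, y_tr)
    proba = model.predict_proba(X_te)[:, 1]          # predicted probability of default
    pred = (proba >= 0.5).astype(int)
    df = pd.DataFrame({"proba": proba, "correct": pred == y_te})
    # Ten equal intervals of the predicted probability: [0, 0.1], (0.1, 0.2], ..., (0.9, 1.0].
    df["interval"] = pd.cut(df["proba"], bins=np.linspace(0, 1, 11), include_lowest=True)
    summary = df.groupby("interval", observed=True)["correct"].agg(["mean", "size"])
    print(f"\n{name}: overall accuracy {df['correct'].mean():.3f}")
    print(summary.rename(columns={"mean": "accuracy", "size": "n"}))
```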